CLI: Implement autocomplete popup #338
Conversation
```rust
draw_input(&mut screen, &input)?;
draw_output(&mut screen, &output)?;
```
I had to rework the drawing a bit, basically redrawing the entire screen on every update. Now that the popup appears and changes shape as the user types, a full redraw is needed; without it, artifacts were left hanging around in the output section of the screen.
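For illustration, a minimal sketch of what that full-redraw orchestration might look like. The wrapper function itself is an assumption; `draw_output`, `draw_input`, `draw_autocomplete` and their argument types are the ones from this diff.

```rust
use std::io::{self, Write};

// Rough sketch, not the actual code in this PR: clear everything first so a
// popup that shrank or vanished leaves no stale characters behind, then
// repaint each section in order.
fn draw_all(
    screen: &mut dyn Write,
    output: &Output,
    input: &Input,
    autocomplete: Option<&Autocomplete>,
) -> io::Result<()> {
    write!(screen, "{}", termion::clear::All)?;
    draw_output(screen, output)?;
    draw_input(screen, input)?;
    draw_autocomplete(screen, autocomplete)?;
    screen.flush()
}
```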
cli/src/rich/mod.rs (outdated)
@@ -361,9 +456,137 @@ fn draw_input(screen: &mut dyn Write, input: &Input) -> io::Result<()> {
```rust
    Ok(())
}

fn draw_autocomplete(screen: &mut dyn Write, autocomplete: Option<&Autocomplete>) -> io::Result<()> {
```
First, this is great! Thank you for picking this up and running with it.
My main issue is with the styling: you're effectively using black-on-black, which I'm sure looks fine on your terminal, but there are no guarantees about the user's configuration (such is the gotcha of terminal UI, of course). For me, it looks like this: [screenshot]
I suggest at least partially copying the web UI: White on LightBlack for the popup (and a matching background), Black on White for the selected element.
That looks nicer on my terminal and seems reasonable with more traditional colour schemes as well.
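A rough termion sketch of that colour pairing, purely illustrative; the helper function and its signature are assumptions, not the code in this PR.

```rust
use std::io::{self, Write};
use termion::color::{Bg, Black, Fg, LightBlack, Reset, White};

// Hypothetical helper: render one popup row, with the selected row inverted.
// White on LightBlack for the popup, Black on White for the selection, as
// suggested above.
fn draw_popup_row(screen: &mut dyn Write, text: &str, selected: bool) -> io::Result<()> {
    if selected {
        write!(screen, "{}{}{}{}{}", Fg(Black), Bg(White), text, Fg(Reset), Bg(Reset))
    } else {
        write!(screen, "{}{}{}{}{}", Fg(White), Bg(LightBlack), text, Fg(Reset), Bg(Reset))
    }
}
```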
The JS module I'm using also highlights the typed portion, which isn't strictly necessary right now since autocomplete only searches the beginning of a word, but might be interesting from a feature parity standpoint.
For some autocomplete-related UX tweaks that were done on the frontend, see:
- Autocomplete if only one possibility is available #170
- Tab complete fills as much as possible #135
- Tab autocomplete #122
(These don't necessarily have to be implemented as part of this PR, but food for thought.)
Finally, it needs a quick pass from cargo clippy and rustfmt.
Oh, and don't forget to give yourself credit in data/changelog.md!
Awesome, really appreciate the feedback. Thanks! I'll make the suggested changes in the next couple of days. Tab completion and the single-result thing shouldn't be a problem either; I originally had something like that working in an early draft.
(Force-pushed from cd0322c to 27b371a.)
Sorry for the delay! Work got busy and I didn't feel much like coding in my spare time. :)
I've updated based on your review feedback (much appreciated) and implemented the extra behaviours you mentioned: tab completion and autocomplete on a single term.
They all ended up being pretty straightforward. It feels much more like the autocomplete popup in the web UI now.
```rust
termion::cursor::Save,
termion::color::Fg(termion::color::LightWhite),
termion::color::Bg(termion::color::Blue),
suggestion
```
Not sure if there's an easier way here. This is a Cow<&str>, and getting a slice of it using [a..b] isn't too easy. This does the same thing, but it feels kind of janky/verbose to me.
May I suggest suggestion.get(input.get_text().len()..).unwrap()?
Also, don't forget that chars() iterates over UTF-8 codepoints, so they're not guaranteed to be 1 byte and thus won't match up with input.get_text().len(). (This happens in a few other places as well.)
There's still a remaining edge case because autocomplete is case-insensitive and uppercase characters aren't guaranteed to be the same number of bytes as their lowercase equivalents, but I'm willing to wait for someone to notice and file a bug before tackling that particular issue.
Honestly, I'm not sure if using Cow was a good call or not. There are a lot of static strings that really don't make sense to heap-allocate, but this is intended primarily for WASM and the most important factor is not speed but binary size. It sure does bump up the complexity a notch.
> May I suggest suggestion.get(input.get_text().len()..).unwrap()?

You may, and I appreciate it. Thanks! Learning how to type Rust wasn't super hard... learning how to write Rust needs a lot more feedback like this.

> don't forget that chars() iterates over UTF-8 codepoints

Ah crap, yeah. I'll dig around... .chars().count() is more often what I'm wanting. I'm a Kotlin guy for the most part, so these problems rarely crop up. :)
Great, I'll take a look tomorrow! I started looking just now, but it's too late for brain work. I'll post the one comment I drafted right away, though.
(Force-pushed from f60d530 to 4e716b0.)
Wow, I wasn't expecting you to make it so far beyond MVP in this PR! Of course, with scope creep comes bug creep, so here we go.
Overall the styling is improved, although I note that you're still using black-on-black for the highlight. The styling also differs between the selected term and its description, which again is black-on-black.
From a usability standpoint, the main issue that jumps out at me is that pressing enter while the single-suggestion autofill is displayed will only run the typed portion unless the user first presses tab. (I think there was a corresponding frontend issue for this, but I can't find it offhand.)
Backspace behaviour also differs from the web interface: backspace once cancels the suggestion without affecting the typed input; subsequent backspaces delete characters and don't cause the suggestion to reappear. That's how we expect GUI text boxes to behave, but isn't necessarily essential in a TUI.
There's another behavioural quirk that should probably be addressed in a different PR: #95. Unfortunately, some reinventing of the wheel is necessary when dealing with selected text.
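For reference, a sketch of the backspace behaviour described above (first backspace only dismisses the suggestion, later ones edit the typed text without re-opening the popup). The function, the state it takes, and the field names are assumptions, not this PR's event handling.

```rust
use termion::event::Key;

// Hypothetical key handler illustrating the described behaviour.
fn handle_key(key: Key, input: &mut String, suggestion: &mut Option<String>) {
    match key {
        Key::Backspace if suggestion.is_some() => {
            // First backspace: cancel the suggestion, keep the typed input.
            *suggestion = None;
        }
        Key::Backspace => {
            // Subsequent backspaces: delete a character, don't reopen the popup.
            input.pop();
        }
        _ => {}
    }
}
```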
cli/src/rich/mod.rs (outdated)
```rust
let start_row = term_height - 2 - autocomplete.len() as u16;

for (pos, suggestion) in autocomplete.suggestions.iter().enumerate() {
    let padding: String = " ".repeat(width.saturating_sub(suggestion.term.len() + suggestion.summary.len()));
```
Of course Clippy is smarter than both of us.
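For context, one way to let the formatter produce the padding for a row like the one above instead of building a separate String. This may or may not be the exact change Clippy suggested here; it's just an illustration, and the helper is hypothetical.

```rust
// Hypothetical helper: pad the gap between term and summary to `width` using
// a format width specifier rather than " ".repeat(..).
// (term.len() counts bytes, so non-ASCII terms would still pad slightly
// wrong; see the chars() discussion earlier in this thread.)
fn pad_row(term: &str, summary: &str, width: usize) -> String {
    let gap = width.saturating_sub(term.len() + summary.len());
    format!("{term}{blank:gap$}{summary}", blank = "")
}
```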
cli/src/rich/mod.rs (outdated)
```rust
fn suggestions_share_prefix(&self, len: usize) -> bool {
    let prefixes = self
        .suggestions
        .iter()
        .map(|suggestion| {
            suggestion
                .term
                .chars()
                .take(len)
                .flat_map(char::to_lowercase)
                .collect::<String>()
        })
        .collect::<std::collections::HashSet<_>>();

    prefixes.len() == 1
}

fn expand_match(self) -> Autocomplete {
    let Some(min_suggestion_len) = self.suggestions.iter().map(|s| s.term.len()).min() else {
        return self;
    };

    let mut expanded_end = self.query.len();
    while expanded_end <= min_suggestion_len && self.suggestions_share_prefix(expanded_end) {
        expanded_end += 1;
    }

    let additions = self
        .suggestions
        .first()
        .unwrap()
        .term
        .chars()
        .skip(self.query.len())
        .take(expanded_end - self.query.len() - 1); // -1 because expanded_end ends up going one past before failing
```
Oh, that's a lot of allocations in a relatively hot loop. How about some iter magic?
Suggested change:
```rust
fn common_suggestion_prefix_len(&self) -> usize {
    let Some(first) = self.suggestions.first() else {
        return 0;
    };

    if self.suggestions.len() == 1 {
        return first.term.len();
    }

    let query_len = self.query.len();
    self.suggestions
        .iter()
        .skip(1)
        .fold(first.term.len(), |term_len, suggestion| {
            first.term[query_len..term_len]
                .char_indices()
                .map(|(i, c)| query_len + i + c.len_utf8())
                .rev()
                .skip_while(|&i| {
                    suggestion.term.get(query_len..i).map_or(true, |s_term| {
                        !first.term[query_len..i].eq_ignore_ascii_case(s_term)
                    })
                })
                .next()
                .unwrap_or(query_len)
        })
}

fn expand_match(self) -> Autocomplete {
    let additions = self.suggestions.first().and_then(|suggestion| {
        suggestion
            .term
            .get(self.query.len()..self.common_suggestion_prefix_len())
    });
```
eq_ignore_ascii_case() is a bit less "smart" than the UTF-8-aware eq_ci() method used by the core, but that's not part of the core's public API and I'm not sure about exposing the utility functions at this point.
Note that this also fixes the bug wherein the original loop was overreporting by 1. I haven't written any tests for this, so it'll need validation, particularly of cases where multi-byte character boundaries don't line up between different suggestions.
In retrospect, Rust was really not the right tool for this, since complex string handling in Rust is challenging and most of the project involves complex string handling of one variety or another.
Cool, better, thanks!
I had a look at how eq_ignore_ascii_case works. In a nutshell, it's 'a'.to_ascii_lowercase() == 'b'.to_ascii_lowercase(). That's also how to_lowercase() works for ASCII characters.
Any objections if I rework your patch to run to_lowercase() on the characters and use that for comparisons? I think that'll keep the benefits of your change here but also help make the multi-byte issues go away. Considering basically everything is ASCII anyway, any perf impact should be small.
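A sketch of what that rework could look like: compare the two terms character by character, lowercasing each side with char::to_lowercase, so nothing depends on the strings having matching byte lengths. This is an assumption about the proposed change, not the code in this PR.

```rust
// Sketch of the proposed rework: lowercase each side character by character
// and compare the resulting iterators.
fn eq_ignore_case(a: &str, b: &str) -> bool {
    a.chars()
        .flat_map(char::to_lowercase)
        .eq(b.chars().flat_map(char::to_lowercase))
}

fn main() {
    assert!(eq_ignore_case("Goblin", "goblin"));
    // Uppercase and lowercase forms can have different byte lengths
    // ("STRAẞE" is 8 bytes, "straße" is 7), but this still compares equal.
    assert!(eq_ignore_case("STRAẞE", "straße"));
    assert!(!eq_ignore_case("Goblin", "gnoll"));
}
```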
> complex string handling in Rust is challenging
My interview question at work involves a lot of text processing... I feel bad when candidates opt to use C++. :)
str and char provide some of the same methods, but they behave differently. str::to_ascii_lowercase() and str::to_lowercase() both allocate; char::to_ascii_lowercase() doesn't. char::to_lowercase() is the odd one out: it's an utter monster (to use) that returns an Iterator to avoid allocating, while dealing with the problem that the result size is not known at compile time (and may not be the same as char::len_utf8() either).
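A tiny standalone demonstration of those differences (example values only):

```rust
fn main() {
    // str methods allocate a new String.
    let lowered: String = "Ärger".to_lowercase();
    assert_eq!(lowered, "ärger");

    // char::to_ascii_lowercase is char -> char, no allocation.
    assert_eq!('A'.to_ascii_lowercase(), 'a');

    // char::to_lowercase returns an iterator, because the result can be more
    // than one char: 'İ' lowercases to 'i' plus a combining dot above.
    let chars: Vec<char> = 'İ'.to_lowercase().collect();
    assert_eq!(chars.len(), 2);
}
```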
initiative_core::utils::CaseInsensitiveStr is my attempt to provide the missing str::eq_ignore_case() in a UTF-8-friendly way without allocating (although I call it eq_ci()). If you want to go beyond ASCII support, my preference would be to add that trait to the exports from initiative_core and use it here. It's not perfect (it goes character by character, so it'll still choke where the number of Unicode codepoints differs), but at least the bugs are consolidated in one place. 🥲
> My interview question at work involves a lot of text processing... I feel bad when candidates opt to use C++. :)
For sure. I love Rust, but I'll have to dust off another language if I ever have to do whiteboard interviews again. It does make for interesting logic puzzles, but that's not really what one wants when trying to do a time-boxed interview question.
Admittedly, using eq_ci() blows the big-O of this whole show out of the water, since we're closing in on worst-case O(nl^2) if my figuring is right; whatever, it's big. Then again, it's no different from making Rust do the heavy lifting, it's just abstracted away by std. A short-circuiting common_len_ci() method in CaseInsensitiveStr would be a better solution, but the yak is getting awfully bald by this point. I'll deal with that at some later point.
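For the record, a sketch of what a short-circuiting common_len_ci() might look like. This is hypothetical, not part of initiative_core's actual API, and it shares the caveat above: it compares character by character, so it can still trip where the number of codepoints differs between cases.

```rust
// Hypothetical common_len_ci(): walk both strings together and stop at the
// first pair of characters that doesn't match case-insensitively, returning
// the shared prefix length in bytes of `a`.
fn common_len_ci(a: &str, b: &str) -> usize {
    let mut len = 0;
    for ((i, ca), cb) in a.char_indices().zip(b.chars()) {
        if ca.to_lowercase().eq(cb.to_lowercase()) {
            len = i + ca.len_utf8();
        } else {
            break; // short-circuit at the first mismatch
        }
    }
    len
}

fn main() {
    assert_eq!(common_len_ci("Goblin", "GOBLET"), 4); // "Gobl"
    assert_eq!(common_len_ci("elf", "Elvish"), 2); // "el"
}
```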
As commands are typed the autocomplete API will be called. The first result gets displayed in faded, italic text. The command summary also gets displayed on the right side of the input line. This will probably overlap for very narrow displays.
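A hypothetical termion sketch of the inline hint described in that commit message (the faded, italic remainder of the first suggestion after the typed text; the right-aligned summary is omitted here). The function, its signature, and the layout are assumptions, not the code in this commit.

```rust
use std::io::{self, Write};
use termion::style::{Faint, Italic, Reset};

// Print the typed text as-is, then the rest of the first suggestion in
// faint italics, then reset styling.
fn draw_inline_hint(screen: &mut dyn Write, typed: &str, suggestion: &str) -> io::Result<()> {
    let remainder = suggestion.get(typed.len()..).unwrap_or("");
    write!(screen, "{}{}{}{}{}", typed, Faint, Italic, remainder, Reset)
}
```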
The entire list of autocomplete suggestions is now displayed in a menu similar to the one used by the web front end. Needed to restructure the drawing some to avoid overlap and get things updating correctly as the user types. Still need to implement navigating up and down the list of selections.
- Return usize for len() and cast as needed
- maybe_create -> try_new
- Better text getter and setter names on Input
- Succincter unwrapping
- Add padding to either side of autocomplete results

Still have to address some more feedback, just getting the easy ones in first.
Better padding construction
Co-authored-by: Mikkel Paulson <[email protected]>
Co-authored-by: Mikkel Paulson <[email protected]>
(Force-pushed from c554aa0 to bcf6adc.)
OK, I think I've got everything.
I didn't address any of the character/string length stuff. I saw you're working on an improved case-insensitive compare; once that's in, I'd be happy to incorporate it here (in another PR).
At this point, I'm declaring this totally good enough. Sorry for the delay. I really appreciate you running with quite an ambitious change here.
... as soon as that test failure is fixed.
Made the algorithm a little easier to read without requiring much extra memory. Now it just creates a vector of character iterators for each candidate, and keeps incrementing the prefix as long as the first character keeps matching.
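A rough sketch of the approach described in that commit message: keep one character iterator per candidate and grow the shared prefix while every iterator yields the same next character. This is a simplified standalone version (ASCII lowercasing, prefix counted in characters); the names are assumptions, not the code in the commit.

```rust
// Hypothetical simplified version of the described algorithm.
fn common_prefix_chars(candidates: &[&str]) -> usize {
    let mut iters: Vec<_> = candidates.iter().map(|s| s.chars()).collect();
    let mut prefix_len = 0;
    loop {
        let mut nexts = iters
            .iter_mut()
            .map(|it| it.next().map(|c| c.to_ascii_lowercase()));
        // Stop when there are no candidates or the first one runs out of characters.
        let Some(first) = nexts.next().flatten() else {
            return prefix_len;
        };
        if nexts.all(|c| c == Some(first)) {
            prefix_len += 1;
        } else {
            return prefix_len;
        }
    }
}

fn main() {
    assert_eq!(common_prefix_chars(&["goblin", "Goblet", "GOBBLE"]), 3); // "gob"
    assert_eq!(common_prefix_chars(&["elf"]), 3);
    assert_eq!(common_prefix_chars(&[]), 0);
}
```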
Added an autocomplete popup similar to the one used in the web UI.