Btw, I just added optional support for FTS5’s new trigram tokenizer – it verifies that trigram support is available before using it.
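For context, that kind of feature test can be sketched with Emacs 29’s built-in sqlite module (this is a hypothetical illustration, not the actual patch): try to create a throwaway FTS5 table with `tokenize='trigram'` and treat an error as “unsupported”.

```elisp
(require 'sqlite)

(defun my/sqlite-fts5-trigram-p ()
  "Return non-nil if this SQLite build supports the FTS5 trigram tokenizer.
Hypothetical helper name; probes an in-memory database."
  (let ((db (sqlite-open)))              ; nil file => in-memory database
    (unwind-protect
        (condition-case nil
            (progn
              ;; Older SQLite builds (< 3.34) reject the trigram tokenizer here.
              (sqlite-execute
               db "CREATE VIRTUAL TABLE probe USING fts5(x, tokenize='trigram')")
              t)
          (error nil))
      (sqlite-close db))))
```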
Emacs’ programmable completion API does not support cursoring of any sort, and neither does Ivy, so there may be few other options.
Another “hack” would be to impose a hard limit of N results, but if a user held down the down-arrow key, they would eventually reach a false “end” of matches.
I’ve been able to speed up org-roam quite a bit so far without having to resort to backend programs: 6x for cleaning up 8000+ deleted files, 2x for importing 8000+ new files, and 35% for interactive search (these improvements have already been merged into v2).
I haven’t looked at speeding up imports beyond those improvements, but I suspect there’s a lot more that can be done. For instance, on my laptop Emacs can read in all 8200+ of my files in less than 1.5 seconds:
(let* ((files (seq-filter (lambda (f)
                            (not (or (string-match "^\\.+" f)
                                     (file-directory-p f))))
                          (directory-files "~/Dropbox/org" t nil t)))
       (count (length files))
       (buf (get-buffer-create "TMP"))
       (time (current-time)))
  ;; Read every file into a single reused buffer, erasing between files.
  (with-current-buffer buf
    (cl-loop for file in files
             do (insert-file-contents file)
             do (erase-buffer))
    (kill-buffer))
  (list count (format "%.06f" (float-time (time-since time)))))
(8289 "1.332432")
And emacsql can insert 9000 rows on my laptop in less than 2 seconds:
(let ((con (emacsql-sqlite "/tmp/test-insert.db")))
  (emacsql con [:create-table-if-not-exists test [a b c d e]])
  (let ((time (current-time)))
    ;; Wrap all 9000 inserts in a single transaction.
    (emacsql-with-transaction con
      (cl-loop for i from 1 to 9000 do
               (emacsql con [:insert-into test :values ["asdfasdfasdfadsfadsfasdfasdfasdfasdfasdfadsfadsfasdfasdfasdfasdf" "asdfasdfasdfadsfadsfasdfasdfasdfasdfasdfadsfadsfasdfasdfasdfasdf" "asdfasdfasdfadsfadsfasdfasdfasdfasdfasdfadsfadsfasdfasdfasdfasdf" "asdfasdfasdfadsfadsfasdfasdfasdfasdfasdfadsfadsfasdfasdfasdfasdf" "asdfasdfasdfadsfadsfasdfasdfasdfasdfasdfadsfadsfasdfasdfasdfasdf"]])))
    (let ((elapsed (float-time (time-since time))))
      (emacsql-close con)
      (format "%.06f" elapsed))))
"1.640338"
Combined, that’s less than 3 seconds for reading 8200+ files and inserting 9000 rows, so a complete update of 9000 files should take seconds rather than minutes.
So I’m guessing there is no need for alternate backends – we can make org-roam much faster by making better choices in the Lisp code. For instance:

- org-roam-db-map-links calls org-element-parse-buffer, which does a huge amount of needless work and creates a huge amount of needless garbage that Emacs then has to clean up
- org-roam also creates a new buffer for each update instead of reusing a temporary one, as in the first code section above
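To illustrate the first point, here is a sketch of the cheaper alternative: scan for bracket links with a regexp instead of building a full parse tree. `my/collect-links` is a hypothetical helper for illustration, not org-roam code, and it only handles `[[...]]`-style links.

```elisp
(require 'org)

(defun my/collect-links ()
  "Return all bracket-link targets in the current buffer.
Uses a plain regexp scan rather than `org-element-parse-buffer',
so it allocates almost nothing."
  (save-excursion
    (goto-char (point-min))
    (let (links)
      ;; Group 1 of `org-link-bracket-re' is the link target.
      (while (re-search-forward org-link-bracket-re nil t)
        (push (match-string-no-properties 1) links))
      (nreverse links))))
```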