Then all the performance improvements gained by using Go are taken away by using Electron.
It's why Jupyter fits pretty well into VSCode/VSCodium.
> 5. Rendering your app
> Electron uses Chromium under the hood, so your user sees the same thing on Windows, Linux and macOS. Tauri, on the other hand, uses the system webview: Edge WebView2 (Chromium) on Windows, WebKitGTK on Linux and WebKit on macOS. Now here comes the bad part: if you are a web developer you know that Safari (based on WebKit) is always a step behind every other web browser. Just check out Can I Use. There is always a bug that you are not seeing from Chrome, only your dear Safari users. The same issue exists in Tauri, and you can't do anything about it; you have to include polyfills. The winner has to be Electron here.
https://developer.microsoft.com/en-gb/microsoft-edge/webview...
It is called WebView2 because the first MSHTML.dll replacement was based on the original Edge engine (EdgeHTML), which Microsoft dropped for their own Chromium fork.
So either one cares to use portable Web development practices, or one follows whatever Chrome does, with the side effect of increasing its market share even further.
Would be interested to see where this goes.
"Zasper ... provides ... exceptional speed".
If they can just make input latency indistinguishable from vim, that's a very worthwhile value add.
I would like to use this with the xeus kernel for SQL (which is also native), and if this reduces the resource consumption of that setup significantly, it's a big plus for me.
- The UI is bloated and buggy: sometimes things scroll, sometimes they don't, sometimes you have to refresh the page. You cannot easily change the UI, as lots of CSS parts have hard-coded fixed sizes.
- The settings are all over the place, from .py files in ~/.jupyter to ini files to auto-generated command line parameters.
- The overall architecture is monolithic and hard to break down; jupyter proxy is a good example of the hacks you have to resort to to reuse parts of Jupyter.
- The front-end technology (Lumino) is ad hoc and cannot be reused; I had to write my own React components, basically reimplementing the whole protocol. Come on, it's 2025.
- The whole automation around nbconvert is error-prone and fragile.
No time to write a lengthy reply here, but I think it's worth separating a legitimate like-for-like comparison from a wider feeling about the ecosystem.
This is why I moved to working with Jupyter notebooks in VS Code, there is no server to manually start.
Vscode can also connect to existing servers. This can be very useful. For instance, you can put a ton of data and CPU in a server and work with vscode on a small laptop. If network latency is low enough, this works great.
The unique feature of Zasper is that the Jupyter kernel handling is built with goroutines and is far superior to how it's done by JupyterLab in Python.
Zasper uses about one fourth of the RAM and one fourth of the CPU used by JupyterLab. While JupyterLab uses around 104.8 MB of RAM and 0.8 CPUs, Zasper uses 26.7 MB of RAM and 0.2 CPUs.
Other features like Search are slow because they haven't been refined yet.
I am building it alone fulltime and this is just the first draft. Improvements will come for sure in the near future.
I hope you liked the first draft.
Not that jupyter's team needed even more respect from the community but damn.
Also, to be fair, I'm one of the Jupyter devs who agrees with many of OP's points and would have pulled it in a different direction; but regardless, I will still support people wanting to go in a different direction than mine.
Genuinely curious; what mechanisms has Jupyter introduced to prevent ecosystem fragmentation?
[1] https://github.com/jupyter/nbformat
[2] https://jupyter-client.readthedocs.io/en/latest/messaging.ht...
JupyterLab supports Lumino and React widgets.
Jupyter Notebook was originally built on jQuery, but Notebook is now built on JupyterLab components, and there's NbClassic for the old UI.
Breaking the notebook extension API from Notebook to Lab unfortunately caused rework in exchange for progress, as I recall.
jupyter-xeus/xeus is an "Implementation of the Jupyter kernel protocol in C++": https://github.com/jupyter-xeus/xeus
jupyter-xeus/xeus-python is a "Jupyter kernel for the Python programming language" that's also what JupyterLite runs in WASM instead of ipykernel: https://github.com/jupyter-xeus/xeus-python#what-are-the-adv...
JupyterLite kernels normally run in WASM, to which they are compiled by Emscripten / LLVM.
To also host WASM kernels in a Go process, I just found goingo: https://github.com/fizx/goingo and https://news.ycombinator.com/item?id=26159440
Vscode and vscode.dev support WASM container runtimes now; so the Python kernel runs in WASM, which runs in a WASM container, which runs in vscode, FWIU.
Vscode supports polyglot notebooks that run multiple kernels, like "vatlab/sos-notebook" and "minrk/allthekernels". Defining how to share variables between kernels is the more unsolved part AFAIU. E.g. Arrow has bindings for zero-copy sharing in multiple languages.
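As a rough sketch of the Arrow route (the file name and values are made up): one kernel writes a table to an Arrow IPC file, and another memory-maps it so the columns are read without copying.

    import pyarrow as pa
    import pyarrow.ipc as ipc

    # Kernel A (Python here): write a table to an Arrow IPC file.
    table = pa.table({"x": [1, 2, 3], "y": [0.1, 0.2, 0.3]})
    with ipc.new_file("shared.arrow", table.schema) as writer:
        writer.write_table(table)

    # Kernel B (Python shown, but R/Julia readers work the same way):
    # memory-map the file so the columns are read without copying.
    with pa.memory_map("shared.arrow", "r") as source:
        shared = ipc.open_file(source).read_all()
    print(shared.to_pydict())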
Cocalc, Zeppelin, Marimo, Databricks, Google Colaboratory (Colab), and VSCode have different takes on notebooks with I/O in JSON.
There is no CDATA in HTML5, so an HTML-based notebook format would need to escape-encode binary data in cell outputs, too. But the notebook format is not a packaging format. So, for reproducibility of (polyglot) notebooks there must also be a requirements.txt or an environment.yml to indicate the version and platform of each dependency in Python and other languages.
repo2docker (and repo2podman) builds containers by installing packages according to the first requirements.txt or environment.yml it finds, per the REES (Reproducible Execution Environment Specification). repo2docker includes a recent version of JupyterLab in the container.
JupyterLab does not default to HTTPS with a Let's Encrypt or self-signed cert, but probably should, because Jupyter is a shell that can run commands as the user that owns the Jupyter kernel process.
Mosh (Mobile Shell) is another way to run a remote terminal. The Jupyter terminal is not built on Mosh.
jupyterlab/jupyter-collaboration for real time collaboration is based on the yjs/yjs CRDT. https://github.com/jupyterlab/jupyter-collaboration
Cocalc's Time Slider tracks revisions to all files in a project, including LaTeX manuscripts (for arXiv), which - with Computer Modern fonts and two columns - are the typical output of scholarly collaboration on a ScholarlyArticle.
Non programmers using notebooks are usually the least qualified to make them reproducible, so better just ship the whole thing.
Docker Desktop and Podman Desktop are GUIs for running containers on Windows, Mac, and Linux.
Containers become out of date quickly.
If programmer or non-programmer notebook authors do not keep versions specified in a requirements.txt upgraded, what will notify other users that they are installing old versions of software?
Are there CVEs in any of the software listed in the SBOM for a container?
There should be tests to run after upgrading notebook and notebook server dependencies.
Notes re: notebooks, reproducibility, and something better than MHTML/ZIP; https://news.ycombinator.com/item?id=35896192 , https://news.ycombinator.com/item?id=35810320
From a JEP proposing "Markdown based notebooks" https://github.com/jupyter/enhancement-proposals/pull/103#is... :
> Any new package format must support cryptographic signatures and ideally WoT identity
Any new package format for jupyter must support multiple languages, because polyglot notebooks may require multiple jupyter kernels.
Existing methods for packaging notebooks as containers and/or as WASM: jupyter-docker-stacks, repo2docker / repo2podman, jupyterlite, container2wasm
You can sign and upload a container image built with repo2docker to any OCI image registry like Docker, Quay, GitHub, GitLab, Gitea; but because Jupyter runs a command execution shell on a TCP port, users should upgrade jupyter to limit the potential for remote exploitation of security vulnerabilities.
> Non programmers using notebooks are usually the least qualified to make them reproducible, so better just ship the whole thing.
Programs should teach idempotency, testing, isolation of sources of variance, and reproducibility.
What should the UI explain to the user?
If you want your code to be more likely to run in the future, you need to add a "package" or a "package==version" string in a requirements.txt (or pyproject.toml, or an environment.yml) for each `import` statement in the code.
If you do not specify the exact versions with `package==version` or similar, when users try to install the requirements to run your notebook, they could get a newer or a different version of a package for a different operating system.
If you want to prevent MITM of package installs, you need to specify a hash for the package for this platform in the requirements.txt or similar; `package==version#sha256=adc123`.
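For illustration, a pinned-and-hashed requirements.txt entry looks roughly like this (the package, version and hashes are placeholders; `pip hash` or pip-tools' `pip-compile --generate-hashes` can produce the real values). Once any requirement carries a hash, pip's hash-checking mode refuses to install anything unhashed:

    # requirements.txt (pip hash-checking mode)
    pandas==2.2.2 \
        --hash=sha256:<wheel-hash-for-your-platform> \
        --hash=sha256:<sdist-hash>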
If you want to further limit software supply chain compromise, you must check the cryptographic signatures on packages to install, and verify that you trust that key to sign that package. (This is challenging even for expert users.)
WASM containers that run jupyter but don't expose it on a TCP port may be less of a risk, but there is a performance penalty to WASM.
If you want users to be able to verify that your code runs and has the same output (is "reproducible"), you should include tests to run after upgrading notebook and notebook server dependencies.
I am really happy to see the welcoming response from the dev community.
Again, not an insult intended to them. They have their job and they do it, and I don't know much about their world either, after all. And of course you can find some data scientists who also deeply know Python. My point is merely that modeling them all generically as "Python programmers" in your head can lead to a model that makes bad predictions, which I found in my brief stint in that world can include you building tools for them that expect more out of them than they have.
I'd be keen to offer it as an alternative to Jupyter on my little GPU platform experiment.
Can I sway you to take this into a ... certain direction?
From my POV any browser-based editor will be inferior to emacs (and to a lesser extent vim) simply because it won't run my elisp code. While a fresh and snappier UI compared to e.g. jupyter would be nice, I would love to see something that integrates well with emacs out of the box.
So, perhaps it would be really nice if the backend+API was really polished as an end product itself in such a way that it could easily interface with other frontends, with remote attachment.
I could go on with my list of demands but I would be thrilled and amazed at my luck if even those two happen...
Further, I do not need a kernel to execute emacs code - I have one and it's called emacs. The point regarding executing elisp code was a cheeky way to say that I am not looking forward to finding replacements for, or porting, all the custom code - mine and others' - that my editor runs, and that no amount of "features" from a webui editor will ever replace that. Hence I also mentioned vim, since over time it got customized for me as well and I wouldn't want to port that either. Nor the convenience of the terminal, which is what vim is for.
Putting that aside as with all respect and gratitude to the author, it was rather clunky in many respects - no interactive story, poor handling of sessions and remote kernels (have you tried to start one, disconnect and reconnect?), no integration with LSP, and lack of many many more features that /could/ be made.
I don't know how much use you make of jupyter kernels or mathematica notebooks or similar technologies, but in my case I explored the available landscape quite thoroughly and regularly revisit it. I know what I'm looking for and EIN is/was not it.
[EDIT] I just noticed you mentioned EIN but linked to emacs-jupyter. Used that as well, of course. I'll add a bit more detail on that in a sibling comment.
Jupyter has an interface and API built in. What Zasper does is reimplement the Jupyter protocol; you can see this at [1]. Jupyter kernels are very different from Mathematica notebooks. Mathematica notebooks aren't related to Jupyter.
Jupyter kernels encapsulate language runtimes so that they can be interfaced with when called from a notebook.
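For a concrete sense of what that means, here is a minimal sketch (Python, using the reference jupyter_client library) of a client driving a kernel over the protocol; this is the same plumbing Zasper reimplements in Go:

    from jupyter_client.manager import KernelManager

    # Start a local kernel and get a blocking client for it.
    km = KernelManager(kernel_name="python3")
    km.start_kernel()
    kc = km.client()
    kc.start_channels()
    kc.wait_for_ready(timeout=30)

    # Send an execute_request and read the stream output from the IOPub channel.
    msg_id = kc.execute("print(2 + 2)")
    while True:
        msg = kc.get_iopub_msg(timeout=10)
        if msg["parent_header"].get("msg_id") != msg_id:
            continue
        if msg["msg_type"] == "stream":
            print(msg["content"]["text"], end="")
        if msg["msg_type"] == "status" and msg["content"]["execution_state"] == "idle":
            break

    kc.stop_channels()
    km.shutdown_kernel()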
I think it can be very suitable, e.g. when you are preparing a presentation, a report, a paper or a repeatable analysis/process. Especially - as with most of those examples - if you want to interleave narrative and code/results. It is less suitable for exploratory analysis, for any kind of interactivity, for connecting to remote sessions (it's possible but clunky), for displaying a table with 10,000 rows, for displaying a large plot or multiple plots, or for being able to zoom into a plot. It's not great at integrating with LSP and similar tools. It could be better at managing code blocks, though one could write additional helpers and bindings fairly easily.
And, finally, it is quite a pain in the ass to have the code stored in a document rather than as code, since it ties me down, even if to my beloved emacs. I develop most of my code as library code which I can directly import/run. During development it is still helpful to see the results of running defined functions and to be able to interact with the dataset. I currently do have a solution and a workflow, but the tools aren't ideal for it.
I want to be able to have my codebase run inside a docker container, to be able to `git pull` to update it on the remote without involving emacs on the remote end, without having duplicate versions of the code in the repo (ie one in the org document and one tangled) for me to manage, and I also want to be able to make a small change in vim and push it back without involving emacs.
> Jupyter has an interface and API built in. What Zasper does is reimplement the Jupyter protocol; you can see this at [1]. Jupyter kernels are very different from Mathematica notebooks. Mathematica notebooks aren't related to Jupyter.
Thank you for the explanation. Up until this very moment I thought mathematica and jupyter were exactly the same. Just to make sure, when you say they are very different and unrelated, do you mean like matlab is unrelated to numpy+ecosystem, like how Honda cars are unrelated to Ford cars, or like how pandas is unrelated to excel?
It helps when you are actually familiar with the technologies before making any - especially contradictory - claims. Mathematica, for all its faults - the primary one being that it is proprietary - is a quite finely polished product, and the jupyter notebook interface draws heavily from it. If I'm not mistaken it is the OG notebook interface, though I'm not making a strong claim here.
Mathematica also has an interface and an API built in. You can run Mathematica (or is it "Wolfram" these days?) code on a headless kernel, you can connect your notebook frontend to a remote kernel, and you can make your own completely independent UI using the APIs in the language. Alternatively, you can connect the notebook interface to a kernel in another language using the J/Link, MathLink or C/C++ Link APIs. Or you can embed the Mathematica kernel into jupyter - there's an existing project for that - and run Mathematica code in jupyter/Zasper/whatever. Or run it in their web UI, as you have been able to for the past decade at this point.
I'll give you the benefit of the doubt and not assume that you are a trollbot, but I sincerely don't understand your need to offer "first page of Google" suggestions when you clearly don't use the technologies you're commenting on.
I don't think that's fair. Rather, IPython, and later Jupyter, explicitly (successfully) sought to create a Mathematica-like notebook experience for Python.
To refresh my memory I just started it and tried using it with a Julia kernel on a remote jupyter. To start, it wouldn't connect to the https endpoint. Maybe because it's signed by a private CA? I don't know, but the Mac trusts it for e.g. the browser and curl. Well anyway, let's forward the http port and try connecting to localhost.
Great, that works, and I'm offered some UUID as a "choice of kernel to connect to". I don't recall having one running before I connected, so it probably was started for me. How do I name it? Ah, there's `jupyter-server-kernel-list-name-kernel`, and now I'm recalling that whatever you name it, the name will disappear if you quit emacs. Let's try.
Meanwhile, I import PlotlyJS and try to create a plot. I get complaints about WebIO (julia package that facilitates interaction with browser) like I do in jupyter (the package is old and doesn't work with current jupyter), except in the browser only the back communication (browser->kernel) is broken, for interactivity. Showing plots works. Anyway, PlotlyJS displays nothing. `Plots`, which renders to a png, somehow produces the axes but not data. Eventually I get PlotlyJS to display an image using explicit image mime types.
Still no interactivity - I would need node for that, to compile widget support for whatever reason - but it does display. I should retest widget support. Sending code to repl works, although at this point I'm used to seeing an overlay over variables that get set.
Ok. Close emacs, restart, go to the session list (`jupyter-server-list-kernels`). The name has been cleared. I can reassociate the buffer with the kernel, but if I have two open kernels, how do I tell which buffer is associated with which kernel?
Overall it mostly works although there's room for polish. However, interactivity or any kind of bidirectional communication remains somewhat difficult.
This is not meant as criticism, just perspective. It's a classic development sequence:
* A team creates a powerful, small-footprint, REPL environment.
* Over time people ask for more features and languages.
* The developers agree to all such requests.
* The environment inevitably becomes more difficult to install and maintain.
* A new development team offers a smaller, more efficient REPL environment.
* Over time ... wash, rinse, repeat.
This BTW is what happened to Sage, which grew over time and was eventually replaced by IPython, then Jupyter, then JupyterLab. Sage is now an installable JupyterLab kernel, as is Go, among many other languages, in an environment that's increasingly difficult to install and maintain.

Hey -- just saying. Zasper might be clearly better and replace everything, in a process that mimics biological evolution. Can't leave without an XKCD reference: https://xkcd.com/927/
Again, not meant as criticism -- not at all.
There is no such thing. There are Jupyter kernels. JupyterLab is just one of many UIs that speak the Jupyter protocol. Other examples include the original Jupyter notebook editor, VSCode Jupyter extension, and now Zasper.
I'm pretty sure Sage was always intended as a project that integrates the world, never "small footprint".
> There is no such thing.
A Web search reveals that the alternate term "Jupyter kernel" appears equally often. The terms are interchangeable.
> I'm pretty sure Sage was always intended as a project that integrates the world, never "small footprint".
A large install became true eventually, but it began as a small Python-based install, about 120 KB. Then people asked for extensions, and William Stein said "Yes".
Yes, that was its goal, when Python wasn't as evolved as it is now. More recently I've come to rely on Python libraries like sympy for symbolic processing. For these kinds of results Sage relies on a rather old environment called Maxima, and I think current sympy does pretty much everything that Maxima does. And as time passes Python libraries are beginning to provide some of the numerical processing originally provided by MATLAB (but more slowly).
> It currently isn't well supported in Windows which is what you might have meant by the complexity.
Actually I was thinking of JupyterLab itself. As time passes I find it more difficult to get it installed without library conflicts. But that can be said about many Python-based projects in modern times, which is why a Python virtual environment is becoming more the rule than the exception, in particular with GPU-reliant chatbots and imaging apps, to avoid the seemingly inevitable library version conflicts.
If memory serves, Sage now installs on Windows by creating a Linux VM to support it.
If I'm loading files from S3, I'm being charged for it. If Marimo re-executes this cell to maintain the state, it will charge me double. I don't need that. I'm able to organize my code and know how it is being run.
With proper structuring of the blocks, Marimo will not re-execute the cell. Also, memoization in script-based workflows is still somewhat clunky in Python, even with something like Snakemake.
I do find Marimo's approach of tracking "global" variables between blocks less than ideal, but it's the best out there.
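To make that concrete, here is a minimal sketch of what a marimo notebook file looks like (the cell contents and CSV path are made up): each cell is a function whose parameters declare what it reads, so editing the summary cell does not re-run the expensive load.

    import marimo

    app = marimo.App()

    @app.cell
    def _():
        import pandas as pd
        return (pd,)

    @app.cell
    def _(pd):
        # The expensive load; re-runs only when this cell (or the import
        # cell it depends on) changes, not when downstream cells are edited.
        df = pd.read_csv("big_extract.csv")
        return (df,)

    @app.cell
    def _(df):
        # Depends only on df, so editing this never triggers the load above.
        summary = df.describe()
        return (summary,)

    if __name__ == "__main__":
        app.run()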
That sounds like a solid improvement. I’m going to give this a test drive. I feel like modularity is one of the hardest aspects of Jupyter notebooks in a team environment.
I’d be interested to hear if anyone has cracked a workflow with notebooks for larger teams. Notebooks are easy for solo or very small teams, and the literate programming style benefits still apply in larger teams but there’s a lot of friction: “hey just %run this shared notebook with a bunch of useful utilities in it - oops yeah it tries to write some files because of some stuff unrelated to your use case in there (that’s essential to my use case)”
My current best approach is to keep "calculation" (pure) code in a .py file and just the "action" (side-effectful) code in the notebook. Then, as far as physically possible, keep the data outside of the notebook (usually a database or CSVs). That helps avoid the main time-sink pitfalls (resolving git conflicts, versioning, testing, etc.) but it doesn't solve, for example, tooling you might want to run - maybe mypy against that action code - sure, you can use nbqa, but... interested to learn better approaches.
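A tiny sketch of that split (the module path and column names are invented): the pure part lives in a normal module that pytest and mypy can see, and the notebook cell only does the I/O.

    # analysis/metrics.py -- pure "calculation" code, importable and testable
    import pandas as pd

    def weekly_totals(events: pd.DataFrame) -> pd.DataFrame:
        """Aggregate an events table into weekly totals. No I/O, no side effects."""
        return (
            events.assign(week=events["timestamp"].dt.to_period("W"))
                  .groupby("week", as_index=False)["amount"]
                  .sum()
        )

    # --- notebook cell: only the side-effectful "action" code lives here ---
    # import pandas as pd
    # from analysis.metrics import weekly_totals
    #
    # events = pd.read_csv("events.csv", parse_dates=["timestamp"])
    # weekly_totals(events).head()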
The literate programming aspect is very nice and I wish it was explored more.
Would be cool if marimo could "unroll" the compute graph into a standalone Python script that doesn't need the marimo library.
Pure-python also helps to work with existing tools out of the box: formatting, linting, pytest, importing notebooks as modules, composition, PEP 723 inline metadata
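As an example of the PEP 723 part, a notebook-as-script can carry its own pinned dependencies in a comment header that tools like uv can read before running it (the versions here are placeholders):

    # /// script
    # requires-python = ">=3.11"
    # dependencies = [
    #     "pandas==2.2.2",
    #     "matplotlib==3.9.0",
    # ]
    # ///
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("data.csv")  # placeholder input
    df.plot()
    plt.show()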
I rarely use notebooks directly anymore unless I require the output to be stored. Do most everything in VSCode with interactive .py files. Gets you the same notebook-y experience + all of the Python tooling.
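For anyone who hasn't tried it, an "interactive .py" file is just a plain Python script with Jupytext-style `# %%` cell markers (the file and data names here are made up); VS Code runs each marked block like a notebook cell:

    # explore.py -- runs top to bottom as a plain script, or cell by cell
    # in the VS Code Interactive Window.

    # %%
    import pandas as pd

    df = pd.read_csv("data.csv")

    # %% [markdown]
    # Cells tagged [markdown] render as prose in the interactive view.

    # %%
    df.describe()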
You are not showcasing anything, just looping low-resolution screenshots with special effects.
As an example, I love JupyterLab's "open console for notebook", but I can't find a way of sending copied text to it or switching focus to it with a keyboard shortcut.
It's a big reason I can't use VS Code's Jupyter implementation.