“Make an SVG of a clock icon” is likely to work. “Make an SVG of a playground swingset with the sun setting” is not.
https://claude.site/artifacts/0f696bf8-399d-42c3-93c0-296493...
I'm obviously biased as a former "business user" now building document authoring software!
I’d also like to see this in music generation. Tools like Suno are cool but I would much rather have something that generates MIDIs and instrument configurations instead.
Maybe this is a good lesson for generative tools. It’s possible to generate something that’s a good starting point, but what people actually want is in the long tail, so the capability for precise modification is the difference between a canned demo and a powerful tool.
> Code coming soon
The examples are quite nice but I have no idea how reproducible they are.
Sounds like you're looking for something like https://www.aiva.ai
I guess I’m hoping for something better. It’s also closed source, the web ui doesn’t have editing functionality, and the output is pretty disjointed. Maybe if I messed around with it enough the result would be decent.
Seriously though, this is amazing; I'm glad to see this tackled directly.
Also, I just learned from this thread that Claude is apparently usable for generating SVGs (unlike e.g. GPT-4 when I tested for it some months ago), so I'll play with that while waiting for NeuralSVG to become available.
I think the utility of generating vectors is far, far greater than all the raster generation that's been a big focus thus far (DALL-E, Midjourney, etc). Those efforts have been incredibly impressive, of course, but raster outputs are so much more difficult to work with. You're forced to "upscale" or "inpaint" the rasters using subsequent generative AI calls to actually iterate towards something useful.
By contrast, generated vectors are inherently scalable and easy to edit. These outputs in particular seem to be low-complexity, with each shape composed of as few points as possible. This is a boon for "human-in-the-loop" editing experiences.
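To make that concrete, here is a rough sketch (my own toy example, not output from the paper) of what "easy to edit" means in practice: with a low-point-count SVG, a couple of attribute changes move and recolour a shape, with no inpainting or upscaling pass.

```
# Toy illustration only: parse a tiny SVG and edit one shape directly.
import xml.etree.ElementTree as ET

svg = """<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <circle cx="75" cy="25" r="12" fill="#f5a623"/>
  <rect x="10" y="70" width="80" height="10" fill="#4a4a4a"/>
</svg>"""

ET.register_namespace("", "http://www.w3.org/2000/svg")
root = ET.fromstring(svg)
ns = {"svg": "http://www.w3.org/2000/svg"}

sun = root.find("svg:circle", ns)
sun.set("cy", "60")         # "put the sun lower in the sky"
sun.set("fill", "#e55b13")  # sunset colour

print(ET.tostring(root, encoding="unicode"))
```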
When it comes to generative visuals, creating simplified representations is much harder (and, IMO, more valuable) than creating highly intricate, messy representations.
I'm not sure what else to add, except that these are exactly the thoughts I think, and it used to feel lonely ;)
https://www.recraft.ai/ai-image-vectorizer
The quality does look quite amazing at first glance. How are the vectors to work with? Can you just open them in Illustrator and start editing?
(The editing quality of the vectorized ones was not great, but it is hard to see how it could be good given their raster-style appearance. I can't speak to the editing quality of the native-generated ones, either in the old obsolete Recraft models or the newer ones, because the old ones were too ugly to want to use, and I haven't done much with the new one yet.)
I'm working on a sparse audio codec that's mostly focused on "natural" sounds at the moment, and uses some (very roughly) physics-based assumptions to promote a sparse representation.
https://blog.cochlea.xyz/sparse-interpretable-audio-codec-pa...
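For anyone curious what that looks like in miniature: the snippet below is not the linked codec, just a generic matching-pursuit sketch over a dictionary of decaying sinusoids (a crude stand-in for "physics-based" resonances, with made-up sizes and atom choices) to show how a short sparse event list can represent audio.

```
# Generic sparse-coding sketch, not the codec from the link above.
import numpy as np

SR, N = 16000, 2048
t = np.arange(N) / SR

# Dictionary: exponentially decaying sinusoids at a handful of frequencies/decay rates.
freqs = np.linspace(100, 4000, 64)
decays = [20.0, 80.0]
atoms = np.stack([np.sin(2 * np.pi * f * t) * np.exp(-d * t)
                  for f in freqs for d in decays])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

def matching_pursuit(x, atoms, n_events=8):
    """Greedily pick (atom index, gain) pairs -- the sparse 'event list'."""
    residual, events = x.copy(), []
    for _ in range(n_events):
        scores = atoms @ residual
        k = int(np.argmax(np.abs(scores)))
        gain = float(scores[k])
        residual -= gain * atoms[k]
        events.append((k, gain))
    return events, residual

signal = np.sin(2 * np.pi * 440 * t) * np.exp(-30 * t)  # a toy "pluck"
events, residual = matching_pursuit(signal, atoms)
print(events[:3], float(np.linalg.norm(residual)))
```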
Here is an ASCII art representation of a hopping rabbit:
```
(\(\
( -.-)
o_(")(")
```
This is a simple representation of a rabbit with its ears up and in a hopping stance. Let me know if you'd like me to adjust it!
https://old.reddit.com/r/identifythisfont/comments/ytd25m/wh...
I had to convert a bitmask to SVG and wanted to skip the intermediary step, so I looked around for papers on segmentation models that output SVG directly and found this one: https://arxiv.org/abs/2311.05276
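For reference, the intermediate step itself can be a few lines if you go through contours. This is a rough sketch using OpenCV, not what the paper does; the epsilon value and the single-channel uint8 mask format are my assumptions.

```
# Rough sketch: binary mask -> contours -> simplified polygons -> SVG paths.
import cv2
import numpy as np

def mask_to_svg(mask, epsilon=1.5):
    """mask: HxW uint8 array with foreground > 0."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    paths = []
    for c in contours:
        poly = cv2.approxPolyDP(c, epsilon, True).reshape(-1, 2)
        d = "M " + " L ".join(f"{x} {y}" for x, y in poly) + " Z"
        paths.append(f'<path d="{d}" fill="black"/>')
    h, w = mask.shape
    return (f'<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 {w} {h}">'
            + "".join(paths) + "</svg>")

mask = np.zeros((64, 64), np.uint8)
cv2.circle(mask, (32, 32), 20, 255, -1)
print(mask_to_svg(mask)[:120])
```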
Fun example: https://gist.github.com/scosman/701275e737331aaab6a2acf74a52...
The progress we are seeing in AI is quite amazing, and it will keep getting better, which is somewhat terrifying.
I tried various models and they got it hopelessly wrong. Claude does an okay job at "Generate an SVG of a pelican riding a bicycle".
Once you have your IR, modify and render. Once you have your render, apply a final coat of AI pixie dust.
Maybe generative models will get so powerful that fine-grained control can be achieved through natural language. But until then, this method would have the advantages of controllability, interoperability with existing tools (like Intellisense, image editors), and probably smaller, cheaper models that don’t have to accommodate high dimensional pixel space.
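A minimal sketch of how that pipeline could be wired up today, under my own assumptions about tooling: cairosvg for the render step, and diffusers' img2img pipeline as one possible "pixie dust" pass (the model id, prompt, and strength are placeholder choices, not a prescribed recipe).

```
# Sketch: editable SVG IR -> raster render -> light img2img finishing pass.
import cairosvg
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

svg = open("scene.svg").read()  # the IR you edited by hand or with tools
cairosvg.svg2png(bytestring=svg.encode(), write_to="scene.png",
                 output_width=768, output_height=768)

pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
out = pipe(prompt="soft lighting, painterly texture",
           image=Image.open("scene.png").convert("RGB"),
           strength=0.3).images[0]  # low strength keeps the composition intact
out.save("scene_final.png")
```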
It would be interesting to see a similar approach that incrementally works from simpler (fewer curves) to more complex representations.
That way one could probably apply RLHF along the trajectory too.
Glad someone actually did it! Great work!