"Do not attempt to make sense of what the machine spirit produces"

By Hinrik Hafsteinsson

Getting an AI to do what you want

I recently watched Kyberwerk’s video about using ChatGPT for 3D printing. He asks the model to generate OpenSCAD code to create 3D models that can then be sent to a 3D printer. I should preface this by saying that I have never used a 3D printer, nor had I seen a video by Kyberwerk before.

I really like how this video underlines both the capabilities and the limitations of the current model.

His first experiment went well. Out of the blue, ChatGPT generated working OpenSCAD code. With a bit of tweaking through the chat prompts, he was able to get the exact output he was looking for. This process should be familiar to most people who have tried ChatGPT these last few weeks.

His second experiment, where he asks for “a Christmas tree topper”, goes wrong. The model returns a detailed example, but the code is non-functioning. We know that ChatGPT does whatever it wants in these cases, and code output isn’t specifically checked for errors. The interesting part is that, when trying to fix this broken output, Kyberwerk writes a chat prompt that goes like this:

“That is not proper OpenSCAD syntax. Try again.”

The model tries to “fix” the syntax errors but, to the user’s surprise, still returns broken code. The model has no real idea which errors the user was referring to. It can’t check its own code, in this case.
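The model can’t run its output, but the user can. A minimal sketch of the external check the model lacks, assuming the `openscad` command-line binary is installed (the file names here are made up for illustration):

```python
import subprocess

def openscad_check_cmd(scad_file: str, out_file: str = "check.stl") -> list[str]:
    # Rendering to STL forces OpenSCAD to fully parse and compile the file,
    # so a failed run means the generated code is broken.
    return ["openscad", "-o", out_file, scad_file]

def code_compiles(scad_file: str) -> bool:
    # Assumes the `openscad` binary is on the PATH; returns True only when
    # the file parses and renders cleanly (exit code 0).
    result = subprocess.run(openscad_check_cmd(scad_file), capture_output=True)
    return result.returncode == 0
```

Feeding the error messages from a check like this back into the chat is about the only way to tell the model *which* errors you mean.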

This must be because OpenSCAD is so poorly represented in the training data. The model can generate working code, but it is easy for it to fill the "blanks" in its knowledge with "knowledge" about something else. I imagine this as a kind of learned pseudo-code the model uses to fill in those blanks, though that is just a mental model that makes it easier for me to comprehend.

Kyberwerk ends up asking the model to start again, as this is still not working OpenSCAD code. The model, however, keeps returning broken output. It isn’t until the user resets the thread that the output changes.

This is my big takeaway from Kyberwerk’s process. By the time the model gives you a single piece of output (which requires a single piece of input), the conversation as a whole has been put on rails. As I understand it, this is because ChatGPT is just GPT-3 in a nice wrapper; the detailed prompt engineering behind GPT-3 magic is replaced by the simulated conversation (the “Chat” in ChatGPT). The model looks at all the prompts from the user (to an extent, at least), and also its own contributions in the current thread, and bases its next output on all of this. The conversation itself has replaced the prompting. If you want a fresh start, you restart the thread.
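As a rough sketch of what being “on rails” means (illustrative only, not OpenAI’s actual implementation), you can picture each reply as being generated from the concatenated thread, so the model’s own broken code keeps haunting every later answer:

```python
def build_prompt(history: list[tuple[str, str]], new_message: str) -> str:
    # Flatten the whole thread into one big prompt; the model conditions on
    # all of it, including its own earlier (possibly broken) output.
    turns = [f"{role}: {text}" for role, text in history]
    turns.append(f"User: {new_message}")
    return "\n".join(turns)

history = [
    ("User", "Write OpenSCAD code for a Christmas tree topper."),
    ("Assistant", "star(5) { spike(); }  // broken, not real OpenSCAD"),
]
prompt = build_prompt(history, "That is not proper OpenSCAD syntax. Try again.")
# The broken snippet is still inside the prompt, so the next answer stays
# anchored to it. Resetting the thread is simply starting with history = [].
```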

Self-replicating baby steps

My other takeaway from Kyberwerk’s video is its implications for self-replication. This is of course obvious from the video’s title: “ChatGPT & 3D Printing: Self-Replicating AI Machines!”. It is an experiment in using natural language to describe an object and having a language model, which in a sense only knows language, convert the information in the description into another format, in this case OpenSCAD code and the accompanying model.

ChatGPT is of course not trained or optimized for 3D modelling applications. This is, however, a test of its claim to be a “general knowledge” model. The great thing is that, yes, you could say that GPT-3 has some general knowledge, but it isn’t really that great at applying it.

This is a thought really similar to my last article, where, instead of 3D modelling, I discussed using ChatGPT to (re)create art. Unlike that kind of experiment, where you can compare ChatGPT to various image-creating AIs, I have not seen an AI model specifically applied to 3D modelling before, although I assume it is a well-known idea.

This means that this YouTube video was my first experience of seeing an AI (let’s use that term here, instead of a language model, which it of course is) trying to create something in the physical world, even though it doesn’t realise it itself. Given a few more abstractions, this opens a door into a future we’ve seen coming for a long time: fully autonomous self-replication.

Do not attempt to make sense of what the machine spirit produces

This quote, which is taken from Kyberwerk’s video, is my favorite way of wording a thought I’ve had for a long time. The machine spirit here is the model being used, but it might as well apply to any algorithm inside a service. From the viewpoint of the general user, who is often completely detached from the inner workings of the software they are using, those inner workings might as well be magic (🤔 sounds familiar…).

In the case of ChatGPT, this means that trying to make sense of the reasoning behind the output of the model wouldn’t help the user at all. It just does what it does, and you, as the user, take it as you can. The output can be amazing, but it can also be incredibly stupid.
