4 Comments
Apr 1, 2023 · edited Apr 1, 2023 · Liked by Jon Stokes

I came across the ReAct pattern:

https://til.simonwillison.net/llms/python-react-pattern

https://interconnected.org/home/2023/03/16/singularity

I copied and pasted content from this article into GPT-4 to have it explain the pattern, had it play the loop with me (in the browser, with me acting as the intermediary), and now I'm having it write some JavaScript that will automate my intermediary job (doing the basic math).
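Roughly, the loop I'm trying to get it to write looks like this (just a sketch; askModel() is a placeholder for however the reply actually gets pulled out of the page, and the Thought/Action/Observation format is the one from the TIL linked above):

```javascript
// Sketch of the "intermediary" job: read the model's requested action,
// do the arithmetic, and feed the result back in as an Observation.
// askModel() is a placeholder for sending text to the chat and returning the reply.
const actionRegex = /^Action:\s*calculate:\s*(.+)$/m;

async function runReactLoop(question, askModel, maxTurns = 5) {
  let nextMessage = question;
  for (let turn = 0; turn < maxTurns; turn++) {
    const reply = await askModel(nextMessage);
    const match = reply.match(actionRegex);
    if (!match) return reply;               // no action requested: treat as the final answer
    // The basic-math part I was doing by hand. Fine for a toy demo,
    // but note it evaluates whatever expression the model asks for.
    const result = Function(`"use strict"; return (${match[1]});`)();
    nextMessage = `Observation: ${result}`; // close the loop
  }
  throw new Error("Too many turns without a final answer");
}
```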


Can't get it fully automated: the send button stays greyed out until I do a legit keypress. But it works up until that point.

Maybe I'm too n00b as a JavaScript h4xx0r.
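If anyone wants to try the same thing: my guess is the page's framework ignores direct .value writes, so the usual trick is to set the value through the native setter and then fire an input event so the app thinks a real keypress happened. Untested against the current ChatGPT markup, and both selectors are guesses:

```javascript
function typeIntoChat(text) {
  const box = document.querySelector("textarea"); // guess: the chat input is a plain <textarea>
  // React-style inputs ignore direct .value assignment, so go through the native setter...
  const setValue = Object.getOwnPropertyDescriptor(
    HTMLTextAreaElement.prototype,
    "value"
  ).set;
  setValue.call(box, text);
  // ...then dispatch an input event so the framework notices and un-greys the send button.
  box.dispatchEvent(new Event("input", { bubbles: true }));
  // Guess at the send button selector; click it once it's enabled.
  document.querySelector("button[data-testid='send-button']")?.click();
}
```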


ReAct is incredibly powerful. It not only extends the capabilities of any LLM in areas where they generally suck (being static, math), it also opens up the black box: you can see the way it reasons as the loop goes on.
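For example, a single turn (made up, just to illustrate the format from the TIL linked above) reads like this:

```
Thought: I should not try to do this multiplication in my head.
Action: calculate: 23 * 17
PAUSE

Observation: 391

Thought: I now have the result of the calculation.
Answer: 23 * 17 = 391
```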

I asked ChatGPT to check what's in my fridge by giving it access to my personal android, which can follow simple commands:

https://www.magyar.blog/i/112089741/playing-react-with-chatgpt-web

ReAct is a huge leap forward in explainability. Now it's possible to break down the reasoning of an LLM for debugging, all exposed in a human-readable form.

Apr 1, 2023 · Liked by Jon Stokes

The piece on Pirate Wires was oddly chilling. Up until earlier today I just laughed off AI X-risk, but there's something about how you describe Altman's caution that gives me a genuinely bad feeling for the first time. I'm hoping that's just an artifact of the piece being well written.
