
How to effectively work with Jupyter Notebooks using LLMs

I discovered some extremely effective ways to work with Jupyter Notebooks with LLM assistance this week.

Things I found massively helpful:

  1. Cell tagging
  2. Agentic cell execution

1. Cell Tagging

Every Jupyter Notebook is just a JSON file with a specific structure. Each cell in a Jupyter Notebook is just an entry in the cells list. One of the properties a cell has is metadata. Jupyter Notebook clients, like the built-in notebook viewer in VSCode / Cursor, have a feature which leverages this metadata property to add tags.

In VSCode, we can do this by opening the command palette and running the "Add Cell Tag" command.

The JSON for a cell with source code print("hello world"), tagged with example-tag, looks like this:

{
  "cell_type": "code",
  "execution_count": null,
  "id": "16bb7324",
  "metadata": {
    "tags": [
      "example-tag"
    ]
  },
  "outputs": [],
  "source": [
    "print(\"hello world\")"
  ]
}

Using tags allows us to reference a specific cell very easily, in a way both I and my LLM understand.

For example, I can prompt my LLM with:

In @notebooks/test.ipynb, in the cell tagged example-tag, change the print statement to all caps

and it would very easily make the change without reading irrelevant cells and bloating context.
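Since the tag lives in plain JSON, this lookup takes only a few lines of code. A minimal sketch (the function name is my own, not part of any library):

```python
import json

def find_cell_by_tag(notebook_path, tag):
    """Return the first cell whose metadata tags include `tag`, else None."""
    with open(notebook_path) as f:
        nb = json.load(f)
    for cell in nb["cells"]:
        # Tags live under cell["metadata"]["tags"]; both keys may be absent
        if tag in cell.get("metadata", {}).get("tags", []):
            return cell
    return None

# e.g. cell = find_cell_by_tag("notebooks/test.ipynb", "example-tag")
```

This is essentially the lookup the agent performs when you reference a cell by tag.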

In this case, I could just as easily reference the cell by its index like:

In @notebooks/test.ipynb, in the first cell, change the print statement to all caps

but as a notebook grows to dozens of cells, this becomes untenable.

Using cell tags scales.
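Because a tag is just an entry in a cell's metadata, it can also be added without a notebook client at all, by editing the JSON directly. A minimal sketch (the function name is my own):

```python
import json

def add_cell_tag(notebook_path, cell_index, tag):
    """Append `tag` to the metadata of the cell at `cell_index`, in place."""
    with open(notebook_path) as f:
        nb = json.load(f)
    # Create the metadata dict and tags list if they don't exist yet
    tags = nb["cells"][cell_index].setdefault("metadata", {}).setdefault("tags", [])
    if tag not in tags:
        tags.append(tag)
    with open(notebook_path, "w") as f:
        json.dump(nb, f, indent=1)
```

Handy for tagging cells in bulk before handing a large notebook to an LLM.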

2. Agentic Cell Execution

Here's where my mind was really blown – running cells in agentic loops.

Cell execution with nbconvert

Say I want my LLM not only to make the change to print "hello world" in all caps, but also to verify that, after making the change and running the cell, the output of that cell is indeed "HELLO WORLD".

Again referencing the cell tag, I can ask my LLM:

"In @notebooks/test.ipynb in the cell tagged example-tag, change the print statement to all caps, run the cell, and verify that the printed text is in all caps"

In this case, your agent should use a tool maintained by the Jupyter Development Team called nbconvert, which, among other things, enables Jupyter notebook execution from the command line.

For our test.ipynb notebook, the command will look like:

jupyter nbconvert --to notebook --execute test.ipynb

By default this writes the executed notebook to test.nbconvert.ipynb (pass --inplace to overwrite the original); in the executed copy, the "outputs" key of each cell is populated. The agent can then look up the cell in question by its tag, read the "outputs" value, and verify the output using its great natural language ability or by writing an ad-hoc script.
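Such an ad-hoc verification script can be very small. A sketch (the function name is my own; it collects only "stream" outputs, i.e. what the cell printed to stdout):

```python
import json

def stdout_of_tagged_cell(notebook_path, tag):
    """Join the stream outputs of the cell tagged `tag`, or None if not found."""
    with open(notebook_path) as f:
        nb = json.load(f)
    for cell in nb["cells"]:
        if tag in cell.get("metadata", {}).get("tags", []):
            # Each stream output stores its text as a string or list of strings
            return "".join(
                "".join(out.get("text", []))
                for out in cell.get("outputs", [])
                if out.get("output_type") == "stream"
            )
    return None

# e.g., after `jupyter nbconvert --to notebook --execute test.ipynb`:
# stdout_of_tagged_cell("test.nbconvert.ipynb", "example-tag")
```

The agent can run this against the executed notebook and check that the captured text is "HELLO WORLD".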

Example agent session

See an example session here walking through what was described above. The agent makes a code change by modifying the "source" value of a tagged cell, uses jupyter nbconvert to execute the notebook, and reads the "outputs" value of the cell to check that our all caps string was printed to stdout.


Let me get this off my chest right off the bat: Keras is all you need for 90% of neural network applications. Before jumping in, you should understand what is going on with neural networks (watch 3Blue1Brown's videos).

Ideally, you can understand all the math behind them and code a basic one from scratch (read Nielsen's Neural Networks and Deep Learning).

But for most cases, even that understanding is unnecessary, given all the pretrained models frameworks ship with nowadays.

But, assuming you're not a n00b and you need to do more than just classifying 2D images or recognizing characters, here are the Deep Learning / Neural Network frameworks that can best fit your use case.

The Only 4 Deep Learning Frameworks That Matter