
· 3 min read

How to effectively work with Jupyter Notebooks using LLMs

This week I discovered some extremely effective ways to work with Jupyter Notebooks using LLM assistance.

Things I found massively helpful:

  1. Cell tagging
  2. Agentic cell execution

1. Cell Tagging

Every Jupyter Notebook is just a JSON file with a specific structure. Each cell is an entry in the top-level cells list, and one of a cell's properties is metadata. Jupyter Notebook clients, like the built-in notebook viewer in VSCode / Cursor, have a feature that leverages this metadata property to add tags.

In VSCode, we can do this by opening the command palette and running the "Add Cell Tag" command.

The JSON for a cell with the source code print("hello world"), tagged with example-tag, looks like this:

{
  "cell_type": "code",
  "execution_count": null,
  "id": "16bb7324",
  "metadata": {
    "tags": [
      "example-tag"
    ]
  },
  "outputs": [],
  "source": [
    "print(\"hello world\")"
  ]
}

Using tags allows us to reference a specific cell very easily, in a way both I and my LLM understand.

For example I can prompt my LLM with:

In @notebooks/test.ipynb, in the cell tagged example-tag, change the print statement to all caps

and it would very easily make the change without reading irrelevant cells and bloating context.

In this case, I could just as easily reference the cell by its index:

In @notebooks/test.ipynb, in the first cell, change the print statement to all caps

but as a notebook grows to dozens of cells, this becomes untenable.

Using cell tags scales.
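Because a notebook is plain JSON, the tag lookup an agent performs is only a few lines of code. Here's a minimal sketch in Python; the find_cell_by_tag helper and the file path are illustrative, not part of any Jupyter API:

```python
import json
from typing import Optional


def find_cell_by_tag(notebook_path: str, tag: str) -> Optional[dict]:
    """Return the first cell whose metadata tags include `tag`, else None."""
    with open(notebook_path) as f:
        nb = json.load(f)
    for cell in nb["cells"]:
        # Tags live under each cell's metadata, as in the JSON shown above
        if tag in cell.get("metadata", {}).get("tags", []):
            return cell
    return None
```

This is essentially the lookup an agent does when you reference a cell by tag in a prompt: it can jump straight to the matching cell instead of reading the whole notebook into context.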

2. Agentic Cell Execution

Here's where my mind was really blown – running cells in agentic loops.

Cell execution with nbconvert

Say I want my LLM not only to make the change to print "hello world" in all caps, but also to verify that, after making the change and running the cell, the cell's output is indeed "HELLO WORLD".

Again referencing the cell tag, I can ask my LLM:

"In @notebooks/test.ipynb in the cell tagged example-tag, change the print statement to all caps, run the cell, and verify that the printed text is in all caps"

In this case, your agent should use nbconvert, a tool maintained by the Jupyter Development Team which, among other things, enables executing Jupyter notebooks from the command line.

For our test.ipynb notebook, the command looks like:

jupyter nbconvert --to notebook --execute test.ipynb

By default this writes the executed notebook to test.nbconvert.ipynb (pass --inplace to overwrite the original instead). Either way, execution populates the "outputs" key of each cell. The agent can then look up the cell in question by its tag, read its "outputs" value, and verify the output using its natural language ability or by writing an ad-hoc script.
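As a sketch of that verification step, the following Python collects the stdout text from the tagged cell of an executed notebook. The get_cell_stdout helper is hypothetical (not part of nbconvert), and it assumes the notebook has already been run so that "outputs" is populated:

```python
import json


def get_cell_stdout(notebook_path: str, tag: str) -> str:
    """Concatenate stdout stream text from the cell tagged `tag`.

    Assumes the notebook was already executed, e.g. via
    `jupyter nbconvert --to notebook --execute --inplace test.ipynb`.
    """
    with open(notebook_path) as f:
        nb = json.load(f)
    for cell in nb["cells"]:
        if tag in cell.get("metadata", {}).get("tags", []):
            # Stream outputs carry printed text; "text" may be a string
            # or a list of strings, and join handles both
            return "".join(
                "".join(out.get("text", ""))
                for out in cell.get("outputs", [])
                if out.get("output_type") == "stream"
            )
    raise ValueError(f"no cell tagged {tag!r}")
```

An agent (or a plain assert in a throwaway script) can then check that get_cell_stdout(path, "example-tag").strip() equals "HELLO WORLD".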

Example agent session

See an example session here walking through what was described above. The agent makes a code change by modifying the "source" value of a tagged cell, uses jupyter nbconvert to execute the notebook, and reads the "outputs" value of the cell to check that our all caps string was printed to stdout.

· 2 min read

After getting worktree-pilled by Primeagen, I've consistently been using worktrees for the past two years.

But I recently learned git clone --bare is suboptimal – there's a better way to clone a repo for worktree usage that doesn't require creating extra branches just for the worktree workflow.

Here's the fastest, cleanest way I've found to clone a repo for use with worktrees. It works every time, without needing to run git gc all the time to clean up storage bloat.

I use it in zsh, but it should work in bash. Fish bros, sorry I have no clue.

function git-empty-clone() {
  # Useful for cloning a repo suitable for use with git worktrees
  # Taken from this Stack Overflow answer:
  # https://stackoverflow.com/questions/54367011/git-bare-repositories-worktrees-and-tracking-branches
  if [ $# -lt 2 ]; then
    echo "Usage: git-empty-clone <repo-url> <target-dir>"
    return 1
  fi

  local repo_url="$1"
  local target_dir="$2"

  git clone "$repo_url" "$target_dir" || return 1
  cd "$target_dir" || return 1

  # Create an empty root commit
  local empty_tree
  empty_tree=$(git hash-object -t tree /dev/null)
  local empty_commit
  empty_commit=$(echo | git commit-tree "$empty_tree")

  # Check out the empty commit in detached HEAD
  git checkout "$empty_commit"
  echo "Use git worktree add <branch_name> to start using worktrees."
}

Example usage to clone my vim configs and add a worktree for the dev branch:

git-empty-clone git@github.com:whoislewys/vim_runtime.git vim_runtime.git
git worktree add dev

Read the thread in the function's comment to learn why it works and uncover the mystery of the git empty tree: https://stackoverflow.com/questions/54367011/git-bare-repositories-worktrees-and-tracking-branches

· 4 min read

When I started working on crypto projects, I chose to work under a pseudonym. Here's why, and why I ultimately chose to reveal my identity.

Why I went anon:

Robbery Risk

The less they know, the better.

Targeted robberies, SIM swaps, and harassment happen. Reducing links back to you reduces your risk.

For the culture

The application developers I looked up to the most in the space were anons: TzTokChad, 0xSami, Banteg, Zeus, Fully... I could go on.

Their market knowledge and technical chops spoke for themselves. There was no need to lean on the reputation of their "previous life".

I wanted to be like that.

I wanted to become so good I couldn't be ignored, regardless of my name, race, or history.

Also, being anon just looked cool imo ¯\_(ツ)_/¯

Career Risk

It felt overly risky betting all of my professional reputation on such a niche field.

Using a pseudonym would allow me to build legitimacy in the space without associating my real identity with the "crypto dev" label - one widely associated with vaporware and scams.

Meanwhile, I could maintain my existing reputation by keeping up my traditional software skills and relationships.

In summary, I liked the optionality of pseudonymity. If crypto became more socially acceptable, I became valuable enough in the niche, or I got tired of the mask, I could always exercise my option to dox.

And that's what I chose to do.


Why I doxed:

In the end, the opportunity cost of being anon bothered me. Also, I realized I was overestimating my personal robbery risk.

Opportunity Cost of Accountability

You have to build credibility, and you have to do it under your own name as much as possible.

- Naval Ravikant

While being a high-integrity anon is respectable and profitable, it can have a high opportunity cost.

If there is a large overlap between the goals of your pseudonym and your true identity, you're missing out on the compounding returns of accountability. And if you know anything about managing money, you know compounding returns are the juiciest.

Being anon is also a lot of upkeep. Keeping accounts straight across multiple SIM cards, keys, and other opsec drudgery started cutting into my personal and professional life.

That said, pseudonymity makes sense in a lot of cases.

If you can grow or have fun by expressing yourself in a way that clashes with societal norms, try it as an anon! Making a throwaway account to ask Reddit bondage questions as a Fortune 500 CEO is a wise choice. Calling yourself BasedBeffJezos and inspiring a wave of techno-optimism is also a great cause.

Or, if pseudonymity is your only hope for accomplishing a net good for humanity that the authorities would consider "subversive", tighten up your opsec and get to it (RIP TornadoCash).

But for my case, where my main quest is to accumulate wealth, using a pseudonym was doing more harm than good.

Overestimating my Robbery Risk

Another reason I doxed is that I had overestimated my personal robbery risk.

The things that put me at ease are my travel frequency and high personal security standards.

I travel frequently enough that pinning my location down remains tricky. Obviously, not everyone has this luxury. What else can you do?

Well, don't walk around with your Trezor in your pocket, its PIN written on your forehead, while live streaming your location.

On top of that, you can keep the expected returns of a violent attack on you low by making your assets difficult to retrieve. Copytrade Vitalik - store crypto assets behind a combination of hardware wallets and multisigs. Keep more traditional accounts safe with multi-factor auth methods like YubiKeys and OTP codes. Don't use text messages for 2FA, or you'll get rekt by a low-tier SIM swap attack.

Fin

May Terrance Andrew's Research Center live on in our hearts and git histories.

- 0xTARC Luis

· 3 min read

Neovim, solc, and coc.nvim FTW

For a crypto developer in the EVM ecosystem, a working knowledge of Solidity is critical. You're much more productive when you can dive into the source yourself, and diving into Solidity source code becomes much easier with deep integration in your dev environment of choice. So of course, as a neck-bearded Neovim user, I wanted to share how I set this up!

· 3 min read

After a full year as a web3 developer, I've read a TON of open source dApp front-end code. One thing that has always concerned me is the widespread usage of Next.js.

Next.js is a server-side rendering (SSR) framework for React. There are a few problems with this:

Problems With Next.js for dApp Development

1. Next.js is too complex for nearly every dApp.

There’s no need for the additional complexity of Next in dApps.

Do you know what Next.js consists of? It’s 38 MB, ser. I can damn near guarantee you don’t need all of that for your dApp.

· 4 min read

Crypto Wallet UX Sucks

It’s no mystery that interacting with blockchains is still hard compared to the buttery-slickness of today’s FinTech.

Think about the difference between buying Minecraft on the App Store with PayPal, and buying an NFT on OpenSea with MetaMask.

This is the gap we need to fill.

· 7 min read

The Pain of Calling Smart Contracts from the Frontend

Have you ever

  • Been confused about what address a contract lives at?
  • Used an out-of-date ABI?
  • Wondered wtf Solidity’s bytes32 should be in TypeScript?
  • Got confused context switching between Solidity and TypeScript to make sure you’re calling a function correctly?

Me too, anon.

What if we could have ONE place that keeps track of all of our addresses, ABIs, AND ALSO makes interacting with our contracts 10x easier? We can with TypeChain! Let’s see how in 5 simple(ish) steps.

· 2 min read

The Problem

So you need your users to choose from a range of values while giving them a hint about how risky their choice is.

Maybe you reach for Material UI, installing 6 MB of potentially dangerous dependencies for a single, hard-to-customize component.

Maybe you're bought-in to the headless component wave and are in the middle of migrating to your own design system (my case!).

Or maybe you just want a great slider with minimal dependencies.

No matter your situation, this one's for you, anon.