Introducing Neurometric: Benchmarking and Optimization for Heterogeneous AI Accelerator Environments
A cool new project
Happy Sunday and welcome to Investing in AI. Today I want to talk about a new project I co-founded, one that blends my love of AI hardware with my belief that AI hardware is about to break through the noise and really start to get adopted. There have been some interesting things going on in the hardware space lately. First, Perplexity and Cerebras announced a collaboration on search with blazing fast performance. Then Murat Onen from Eva wrote an excellent piece arguing that hardware is the only moat in AI.
I’m a big believer that, while GPUs are here for the long term, we are about to see an explosion in the adoption of other types of AI hardware. That’s why some friends with hardware backgrounds and I started Neurometric.
The problem with the AI hardware space is that figuring out what you need in a rapidly changing world is hard. All of the major chips (SambaNova, Cerebras, Groq, Rain, Eva, Sagence, Blaze, Tenstorrent, Inferentia, and dozens more) have made different tradeoffs for different use cases, which makes choosing the right one difficult. Some run CNNs better than LLMs. Some are best for diffusion models. They have different amounts of memory, different price points, and different levels of power consumption.
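To make that tradeoff concrete, here is a minimal sketch of the kind of weighted scoring a buyer might run over candidate accelerators. The chip names and numbers are entirely made up, and the scoring function is a toy; it is meant to illustrate why "the best chip" depends on the workload, not to describe Neurometric's actual methodology.

```python
# Illustrative only: hypothetical accelerators with made-up specs.
# The point: which chip "wins" depends on how you weight the tradeoffs.
from dataclasses import dataclass

@dataclass
class Accelerator:
    name: str
    tokens_per_sec: float   # throughput on the target model (higher is better)
    memory_gb: float        # attached memory (higher is better)
    price_usd: float        # cost per card (lower is better)
    watts: float            # power draw (lower is better)

CANDIDATES = [
    Accelerator("chip_a", tokens_per_sec=900,  memory_gb=96,  price_usd=18000, watts=350),
    Accelerator("chip_b", tokens_per_sec=1400, memory_gb=48,  price_usd=25000, watts=500),
    Accelerator("chip_c", tokens_per_sec=600,  memory_gb=192, price_usd=14000, watts=200),
]

def score(acc: Accelerator, weights: dict[str, float]) -> float:
    """Weighted score: reward throughput and memory, penalize cost and power."""
    return (
        weights["throughput"] * acc.tokens_per_sec
        + weights["memory"] * acc.memory_gb
        - weights["price"] * acc.price_usd
        - weights["power"] * acc.watts
    )

# Two workloads weight the same specs very differently.
latency_sensitive = {"throughput": 1.0, "memory": 5.0, "price": 0.01, "power": 0.5}
cost_sensitive    = {"throughput": 0.3, "memory": 2.0, "price": 0.05, "power": 2.0}

for label, w in [("latency-sensitive", latency_sensitive), ("cost-sensitive", cost_sensitive)]:
    best = max(CANDIDATES, key=lambda a: score(a, w))
    print(f"{label}: {best.name}")
```

Even in this toy version, the two workloads pick different chips, which is the whole problem in miniature: the ranking changes as soon as the weights do.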
As the world moves to heterogeneous AI systems, where you run some models on GPUs and some elsewhere, what and where is the “elsewhere”? It can be difficult to figure out which approach is right, so we decided to build a company that can help. Neurometric has started advising both data center clients who are building out heterogeneous clouds and large enterprises who want independent benchmarking reports on which hardware is best for them.
It’s an exciting project to build all of this non-GPU expertise in one place. And these new chips are super cool because they use all kinds of unconventional compute architectures, ideas that have been around for decades but had limited uses until AI came along. We can take a few more design partners for our benchmarking and policy software that manages these heterogeneous compute environments, so if you are considering an expansion beyond GPUs, please reach out.
And of course, expect lots of insights here about what we are learning as we go deeper into the AI hardware world. Thanks for reading.