Nearly a year and a half ago, I published a major forecast on artificial intelligence [ru]. Read it if you haven't already; it still holds up well.
Recently, I decided to expand on that forecast, but a single comprehensive post isn't coming together, so there will be a series of smaller notes instead.
I'll start with industry transparency: the current AI movement has several impressive aspects that I'd like to discuss.
I found an excellent in-depth analysis of Goodhart's Law (when a measure becomes a target, it ceases to be a good measure).
Cedric Chin breaks the law down into its components and shows, with examples from Amazon's practice, how to work with each of them.
In brief: things are neither as clear-cut nor as bad as they seem.
When people are under pressure from a target metric, they have three behavioral strategies:
For example, suppose you have a factory producing goods and a production plan to meet.
The possible strategies for your employees are then:
Accordingly, the manager's goals are:
The original post contains interesting examples of Amazon's adaptation to these principles.
For example, they switched from optimizing output metrics to optimizing input metrics, evolutionarily refining their heuristics about them: input metrics are harder to falsify, and their impact on the output can be evaluated empirically.
To oversimplify: instead of optimizing a "number of sales" metric, it may be better to optimize "number of cold calls", "number of ads", and so on, iteratively refining the definitions based on business data.
As an example, here is the evolution of the metric for one of Amazon's teams:
- number of detail pages, which we refined to
- number of detail page views (you don’t get credit for a new detail page if customers don’t view it), which then became
- the percentage of detail page views where the products were in stock (you don’t get credit if you add items but can’t keep them in stock), which was ultimately finalized as
- the percentage of detail page views where the products were in stock and immediately ready for two-day shipping, which ended up being called Fast Track In Stock.
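To make the final formulation concrete, here is a minimal sketch of how such a metric could be computed. The `PageView` structure and its field names are my illustrative assumptions, not Amazon's actual schema:

```python
from dataclasses import dataclass


@dataclass
class PageView:
    """One detail page view; the fields are illustrative assumptions."""
    in_stock: bool
    ready_for_two_day_shipping: bool


def fast_track_in_stock(views: list[PageView]) -> float:
    """Percentage of detail page views where the product was in stock
    and immediately ready for two-day shipping."""
    if not views:
        return 0.0
    hits = sum(1 for v in views
               if v.in_stock and v.ready_for_two_day_shipping)
    return 100 * hits / len(views)
```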
For details, I recommend visiting the original post.
Every two or three years, I start a new project and have to "relearn" how to collect and visualize metrics this time around. It is never one particular technology that changes; it is just guaranteed that something has.
I have sent metrics over UDP [ru] to Graphite (in 2024, a post from 2015 reads amusingly), used SaaS solutions like Datadog and New Relic, aggregated metrics in-process for Prometheus to scrape, and written metrics as logs for AWS CloudWatch.
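As an illustration of how simple the oldest approach was, here is a minimal sketch of the Graphite variant: one metric sent as a UDP datagram in Carbon's plaintext protocol ("name value timestamp"). The host, port, and metric name are placeholders, and it assumes Carbon's UDP listener is enabled:

```python
import socket
import time


def send_metric(name: str, value: float,
                host: str = "127.0.0.1", port: int = 2003) -> None:
    """Fire-and-forget one metric in Graphite's plaintext protocol:
    "<name> <value> <unix-timestamp>\n" as a single UDP datagram."""
    line = f"{name} {value} {int(time.time())}\n"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(line.encode("ascii"), (host, port))


send_metric("myapp.requests.processed", 1)
```

No buffering and no delivery guarantees; losing a datagram simply means losing a data point.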
And there were always nuances:
Therefore, there is no single ideal way to collect metrics. Moreover, the variety of approaches, together with the rapid evolution of the entire field, has produced a vast number of open-source bricks that can be used to build any Frankenstein.
So, when the time came to implement metrics in Feeds Fun, I spent a few days updating my knowledge and organizing my thoughts.
In this essay, I will share some of my thoughts on metrics in general and the solution I have chosen for myself: not as a tutorial, but as theses on the topics I am passionate about.
Nearly a month ago, I decided to add Gemini support to Feeds Fun and did some research on the top LLM frameworks; I didn't want to reinvent the wheel.
As a result, I found an embarrassing (in my opinion, of course) bug in LlamaIndex's Gemini integration. Judging by the code, it is also present in Haystack and in the Gemini plugin for LangChain, and the root of the problem lies in the Google SDK for Python.
When a new Gemini client is initialized, the framework code overwrites the API keys of all previously created clients, because by default the API key is stored in a singleton.
This is deadly if you have a multi-tenant application, that is, one that works with multiple users, and unnoticeable in all other cases.
For example, in my case, in Feeds Fun, a user can enter their own API key to improve the quality of the service. Imagine the situation: a user enters an API key to process their news, and ends up paying for the tokens spent on every other user of the service.
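Here is a minimal sketch of the failure mode as I understand it, using the google-generativeai Python SDK directly; the key values are placeholders:

```python
import google.generativeai as genai

# Tenant A configures their key; the SDK stores it in module-level state.
genai.configure(api_key="TENANT_A_KEY")
model_a = genai.GenerativeModel("gemini-pro")

# Tenant B configures their key, which silently replaces the shared
# singleton configuration that model_a relies on as well.
genai.configure(api_key="TENANT_B_KEY")
model_b = genai.GenerativeModel("gemini-pro")

# From here on, requests made through model_a authenticate with
# tenant B's key, i.e. tenant B pays for tenant A's tokens.
```

The frameworks inherit this behavior, presumably because their Gemini wrappers call configure() on initialization.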
I reported this bug only to LlamaIndex, as a security issue, and there has been no reaction for three weeks. I'm too lazy to reproduce and report it for Haystack and LangChain, so here is your chance to file a bug against a top repository: all the info is below, and reproducing it is not difficult.
This error is notable for many reasons:
Ultimately, I gave up on these frameworks and implemented my own client on top of the HTTP API.
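In case it's useful, here is a minimal sketch of such a client, assuming the public generateContent REST endpoint; retries, streaming, and error handling are omitted, and my production code looks different:

```python
import requests

API_URL = ("https://generativelanguage.googleapis.com/"
           "v1beta/models/{model}:generateContent")


def generate(prompt: str, api_key: str, model: str = "gemini-pro") -> str:
    """Call the Gemini REST API directly. The key travels with the
    request, so there is no shared singleton to clobber between users."""
    response = requests.post(
        API_URL.format(model=model),
        params={"key": api_key},
        json={"contents": [{"parts": [{"text": prompt}]}]},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["candidates"][0]["content"]["parts"][0]["text"]
```

The point is not the dozen lines themselves but the property they buy: each request is explicitly bound to a specific key.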
My conclusion from this mess: you can't trust the code under the hood of modern LLM frameworks; you need to double-check and proofread it. Just because they claim to be "production-ready" doesn't mean they really are.
Let me tell you more about the bug.
Recently, I unexpectedly had an encounter with the justice system in the USA.
What conclusions can be drawn from this: