Welcome to the first episode of Digital Pulse.
If you’re new here, Digital Pulse is a fortnightly newscast where we highlight some of the interesting things that have happened recently in the tech sector.
In our first, longer-than-expected edition of Digital Pulse we discuss:
The Cost of Observability!
An eagle-eyed analyst spotted a $65 million line item in Datadog’s earnings call. Who could this be? Twitter went wild, and it turns out that Coinbase did indeed spend a decent chunk of cash on observability. We talk about what happened, how the story unfolded, and how to justify such a number … as well as who gets the blame when things go wrong.
We know that DHH thinks the cloud is expensive. He’s also critical of serverless, and perhaps rightly so. Well, it turns out that the Amazon Prime Video crew may have similar feelings! Again Twitter exploded as they announced reducing costs by 90% by moving from serverless microservices to a monolith. Serverless, microservices, or monoliths: what do we do next?
Rival Streaming schools
Kafka is dominant in the event sourcing game, and Redpanda are looking to eat some of Confluent’s lunch. Redpanda benchmarks well with robust data guarantees, and there’s been a bit of hubbub: Confluent are in the process of creating a blog series that queries some of the Redpanda claims. Alexander Gallego put out a wonderful offer on LinkedIn: “Bring me your confluent bill I will cut by 50% or I will give you money.” We discuss why you should care and why Redpanda is very much a viable alternative to Kafka, Confluent or otherwise.
Rise of the LLMs
MosaicML announced MPT-7B, an open-source, commercially usable LLM, trained for $200 … no, wait! $200k all in, over 9.5 days and with zero human intervention. The pace of LLMs is wonderful and terrifying, and it’s great to see the open-source community provide OpenAI alternatives.
Speaking of which, things may get cheaper still: QLoRA presents yet another memory-reducing mechanism to fine-tune LLMs. Utilise that 48GB GPU you have lying around!
Cheaper and better LLMs that can be utilised by anyone for a great many purposes decentralise power and mean that everyone can benefit from the rise of LLMs. Run them on-prem or in your own cloud, keep your data to yourself, and reduce the risk of leaking it.
Not that everything AI means LLMs, but here we go again: this week GitLab announced that GitLab 16 will get a launch event on the 22nd of June, featuring a suite of new functionality including Code Suggestions, suggested reviewers and comment summaries. We discuss why you should care and what this means for developers going forward.
If the machines don’t take over, we’ll see you next time for FinOps on a regular Everything Delivery podcast episode.
You can find this podcast on:
Apple Podcasts: https://podcasts.apple.com/it/podcast/everything-delivery/id1680879029?l=en