Memory Leak — #35

Astasia Myers · Published in Memory Leak · 5 min read · Mar 15, 2024

VC Astasia Myers’ perspectives on AI, cloud infrastructure, developer tools, open source, and security. Sign up here.

🚀 Products

Introducing Devin, the First AI Software Engineer

This week Cognition Labs introduced Devin, the world’s first fully autonomous AI software engineer. With advances in long-term reasoning and planning, Devin can plan and execute complex engineering tasks requiring thousands of decisions, recall relevant context at every step, learn over time, and fix its mistakes. On the SWE-bench coding benchmark, Devin correctly resolves 13.86% of issues end-to-end, far exceeding the previous state of the art of 1.96%. Even when given the exact files to edit, the best previous models could only resolve 4.80% of issues.

Why does this matter? This week Devin felt like a “break the internet” moment. Social media was engrossed with the demo videos, though many people were skeptical, dismissing it as demoware. The excitement around AI agents is not unwarranted: it is impressive to see agents take on complex tasks with multi-step reasoning. The big unlock will be providing this capability not only to engineers but to all builders.

Cognition-labs.com

Infrastructure as Code Development With Amazon CodeWhisperer

At re:Invent 2023, AWS announced Infrastructure as Code (IaC) support for Amazon CodeWhisperer. CodeWhisperer is an AI-powered productivity tool for the IDE and command line that helps software developers quickly and efficiently create cloud applications to run on AWS. The IaC languages currently supported are YAML and JSON for AWS CloudFormation, TypeScript and Python for AWS CDK, and HCL for HashiCorp Terraform. In addition to providing code recommendations in the editor, CodeWhisperer also features a security scanner that alerts the developer to potentially insecure infrastructure code and offers suggested fixes that can be applied with a single click.
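
To make the IaC support concrete, here is a minimal AWS CDK (Python) sketch of the kind of infrastructure code such an assistant helps complete; the stack name, bucket ID, and settings are illustrative assumptions, not taken from AWS’s announcement.

```python
# pip install aws-cdk-lib constructs
from aws_cdk import App, Stack, aws_s3 as s3
from constructs import Construct


class StorageStack(Stack):
    """Hypothetical stack defining a private, encrypted S3 bucket."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # The kind of boilerplate an IaC-aware assistant can suggest:
        # versioning on, encryption at rest, and all public access blocked.
        s3.Bucket(
            self,
            "ArtifactBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
        )


app = App()
StorageStack(app, "StorageStack")
app.synth()
```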

Why does this matter? When researching the code generation category, we found that teams were hungry for additional functionality that enabled them to bring their code to production. By supporting Infrastructure as Code (IaC) languages such as CloudFormation, CDK, and Terraform HCL, CodeWhisperer expands its reach beyond traditional development roles. This advancement is pivotal in merging runtime and infrastructure code into a cohesive unit, significantly enhancing productivity and collaboration in the development process. The inclusion of IaC enables a broader range of professionals, especially Site Reliability Engineers (SREs), to actively engage in application development, automating and optimizing infrastructure management tasks more efficiently.

Unstructured Raises $40M Series B From Menlo Ventures, Databricks Ventures, IBM Ventures and Nvidia to Make Enterprise Data LLM-Ready

Unstructured’s technology has emerged as a critical piece of infrastructure, not only delivering LLM-ready data to vector databases but also driving performance improvements of more than 20% across LLM applications without any customization. Unstructured’s open source library has been downloaded more than 6 million times and is used in more than 12,000 code bases; more than 45,000 organizations, including more than one third of the Fortune 500, use Unstructured to preprocess their proprietary data.

Why does this matter? Extracting and parsing unstructured data is critical for training and fine-tuning models in addition to RAG architectures. The real-time, continuous data access that Unstructured provides means that LLMs are kept up to date, have access to knowledge specific to organizations, and are less prone to hallucinations. Unstructured enables data scientists to pre-process data at scale so they spend less time on data prep and more time on building. Unstructured is for unstructured data what Fivetran is for structured data.
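
As a rough illustration of that preprocessing step, here is a minimal sketch using Unstructured’s open source Python library; the input file name is hypothetical and the chunking is deliberately simpler than what a production RAG pipeline would do.

```python
# pip install "unstructured[pdf]"
from unstructured.partition.auto import partition

# Partition a document into typed elements (Title, NarrativeText, Table, ...).
elements = partition(filename="quarterly-report.pdf")  # hypothetical input file

# Keep each element's text as a candidate chunk for embedding into a
# vector database ahead of retrieval-augmented generation.
chunks = [el.text for el in elements if el.text and el.text.strip()]
print(f"Extracted {len(chunks)} text chunks from {len(elements)} elements")
```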

📰 Content

AI-Aided Coding?

Last week, TNS VoxPop asked, “How much time do you save per week by using an AI-powered coding assistant?” Over 50% still aren’t using a coding assistant, but 12% say they are saving 5+ hours a week by using assistants such as GitHub Copilot or JetBrains AI Assistant.

Why does this matter? GitHub Copilot is the most popular coding assistant. In October 2023, Microsoft stated that it had over 1 million paying Copilot users and more than 37,000 organizations subscribing to Copilot for Business. Adoption of AI-enabled developer tools is in the early innings. As we saw with this week’s product announcements, a big step forward will be moving beyond assistants that lower the activation energy to complete a task toward agents that take over entire development flows. We expect engineers to use multiple AI-enabled tools, picking the best one in the moment for their language or task.

thenewstack.io

How Figma’s Databases Team Lived to Tell the Scale

Figma’s database stack has grown almost 100x since 2020. This is a good problem to have because it means their business is expanding, but it also poses some tricky technical challenges. Over the past four years, Figma made a significant effort to stay ahead of the curve and avoid potential growing pains. In 2020, they were running a single Postgres database hosted on AWS’s largest physical instance; by the end of 2022, they had built out a distributed architecture with caching, read replicas, and a dozen vertically partitioned databases. They split groups of related tables, such as “Figma files” or “Organizations,” into their own vertical partitions, which allowed them to make incremental scaling gains and maintain enough runway to stay ahead of their growth.
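
As a simplified sketch of what vertical partitioning looks like at the application layer, the snippet below routes queries for different table groups to different Postgres instances; the table-to-database map, hostnames, and driver choice (psycopg2) are illustrative assumptions, not Figma’s actual implementation.

```python
# pip install psycopg2-binary
import psycopg2

# Illustrative map from table groups to vertically partitioned databases.
# Figma's real routing layer and DSNs are not public; these are placeholders.
PARTITION_MAP = {
    "files": "postgresql://app@files-db.internal:5432/product",
    "file_versions": "postgresql://app@files-db.internal:5432/product",
    "organizations": "postgresql://app@orgs-db.internal:5432/product",
    "org_members": "postgresql://app@orgs-db.internal:5432/product",
}


def connect_for(table: str):
    """Open a connection to the vertical partition that owns the table."""
    return psycopg2.connect(PARTITION_MAP[table])


# Queries that touch only one table group stay on one database,
# which is what keeps the vertical split transparent to callers.
with connect_for("organizations") as conn, conn.cursor() as cur:
    cur.execute("SELECT id, name FROM organizations LIMIT 10")
    rows = cur.fetchall()
```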

Why does this matter? Figma is one of the fastest growing SaaS companies of all time. They considered alternatives like NoSQL databases and horizontally sharded databases compatible with Postgres, but found the migration would be too challenging, which underscores the pain of database migrations. Instead, they built their own system for horizontal sharding, and the post goes into the details of the technical choices made along the way.

Did OpenAI Just Accidentally Leak the Next Big ChatGPT Upgrade?

OpenAI may have accidentally leaked details about a new AI model called GPT-4.5 Turbo. The leak suggests that GPT-4.5 Turbo will be faster, more accurate, and have a larger knowledge base than its predecessor. The leaked snippet mentions a “knowledge cutoff” of June 2024, which has led some to believe it is either a typo or a sign of a potential July/August release for GPT-4.5 Turbo. For context, the current GPT-4 Turbo model has a knowledge cutoff of April 2023. Another critical piece of information is the mention of a 256k-token context window, double the 128k capacity of GPT-4 Turbo.

Why does this matter? While this is all speculation, the detail about the knowledge cutoff suggests that GPT-4.5 Turbo is still in development. We are seeing models become increasingly real-time, and context windows could continue to expand. It is interesting to imagine a world of multi-million-token contexts and the tooling needed for that reality. As investors, we are constantly thinking about the enduring layers of AI infrastructure, particularly when the stack is changing so quickly.

💼 Jobs

⭐️ DragonflyDB — React Tech Lead, Dragonfly Cloud

⭐️ Chroma — Member of Technical Staff

⭐️ Speakeasy — Product Engineer

Views expressed in posts and other content linked on this website or posted to social media and other platforms are my own and are not the views of Felicis Ventures Management Company, LLC. The posts do not constitute investment, legal, tax, or other advice and do not constitute an offer to invest in any security.


Astasia Myers · Memory Leak
General Partner @ Felicis, previously Investor @ Redpoint Ventures, Quiet Capital, and Cisco Investments