The "Context" space is fascinating... I think the big questions are who, what, and how: which startups and existing infrastructure players will capitalize on the opportunity to harvest, operationalize, and optimize context? What does that layer look like? Does it complement, integrate with, or replace existing semantic layers and metadata?
Here are a few Substacks and articles. For example, your comment:
"Context is the combination of inputs, intent, constraints, history, permissions, exceptions, and outcomes that surround every real enterprise decision."
made me think of the Enterprise Context Management Substack; specifically, this article, which proposes a hierarchy of context (how each level is curated, why the levels differ, and how each is governed) and organizes the levels into a conceptual framework, "The Enterprise Context Skyscraper": https://enterprisecontextmanagement.substack.com/p/skyscraper
The Propagation is another Substack centered around Context (specifically, Authority): "The AI Doesn't Know Who You Are": https://thepropagation.report/p/the-ai-doesnt-know-who-you-are
Then there's the Foundation Capital post about Context Graphs as the "Next Trillion Dollar Opportunity": https://foundationcapital.com/context-graphs-ais-trillion-dollar-opportunity/
Curious to learn about other key resources you or others have found!
Regarding Ivan's post, I highly recommend Power and Prediction by Agrawal, Gans, and Goldfarb. They make the case that, like electrification (and other foundational innovations), AI represents a major structural change, and that we will eventually rearchitect how our systems are built. It will take years (decades?) to pull off. That means building from the ground up rather than stapling AI onto what exists. We are starting to see this in a few businesses, AIFleet and Range Financial to name a couple. There will be fundamental changes to business models and margin profiles.
Regarding memory: context is everything, and I could not agree more that the processes behind decision-making will be the driving force. The key is introducing the right data into the process at the right time, or in combination with other data that adds information. That is a question of access, and a lot of enterprise data is locked behind walls (think Salesforce). Humans make decisions by verifying data across silos and applying it to a structured process designed by humans; we have built software systems (databases and query engines) to help us organize and find that data. The next step is building processes by which software/agents verify data themselves, so that agentic decision-making processes can make the decisions. Providing access to the right data, in the right place, at the right time will be key. The current system just doesn't do that, and, as with the introduction of electrification, building "belts" to connect the power source to the machine won't cut it. Electrical energy gave us new ways to build that were major improvements over mechanical energy. We need to "centralize" the data and bring everything to it. Centralization may be the key to abstraction.
Incumbent SaaS vendors with large datasets (Salesforce, Workday, ServiceNow, and others) will be hesitant to rearchitect their systems in a way that costs them control. Enterprise customers may come to demand access to, and control of, their data from their vendors once they realize it can affect financial performance. This will be a hard transformation; some will pull it off, as Adobe did in its painful transition to SaaS, but others will fail because they are stubborn.
One framework that seems to be gaining ground, and that could be part of the solution, is BYOC. Enterprises will come to demand control of their data and of who is invited to the party. I see a decoupling of control/compute/execution from the data plane; this has been happening with Iceberg and the Lakehouse, where data and compute are decoupled. As mentioned above, "centralization" is an abstraction layer, and it may look something like the Lakehouse, with multimodal data models and querying. It may even include the abstraction of the database itself: give the AI access to the object store and the available engines, and let the agent choose whichever database(s) best fit the functionality needed to make the decisions. This is already happening with Postgres. The engines, pipelines, security, access control, verification, and other infrastructure that hasn't been invented yet are key, and they need to catch up to storage if we are to begin tearing down the SaaS silos and freeing the data for this new paradigm.
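A toy sketch of that "let the agent choose the engine" idea, with made-up workload descriptors and engine labels standing in for a real catalog of options; the routing rule itself is the only point:

```python
# Illustrative only: a routing rule an agent might apply when picking a
# storage/query engine for data it discovers in the object store.
# Engine labels and workload fields are hypothetical placeholders.

def choose_engine(workload: dict) -> str:
    """Map a workload description to an engine family."""
    if workload.get("vector_search"):
        return "postgres+pgvector"   # similarity search over embeddings
    if workload.get("scan_gb", 0) > 100:
        return "lakehouse-sql"       # large analytical scans on open table formats
    if workload.get("point_lookups"):
        return "postgres"            # OLTP-style transactional access
    return "duckdb"                  # small, local analytical work
```

In practice the agent would derive the workload description from metadata in the catalog rather than receive it directly, but the abstraction is the same: storage stays put in the object store, and the engine becomes a late-bound choice.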
I loved Agrawal's electricity framing as well; according to it, we have 38 more years of experimenting until we figure out how to rewire the enterprise for AI :)....
What evidence do you see that the "BYOC (Bring Your Own Context) Framework" is gaining ground? Are there resources you can share about how it's being adopted? I just replied below with resources about specific startups (AI One, PlayerZero, Arize, Ordinant) that are building specific pieces of infrastructure to extract, manage, catalog, govern, and authenticate context... It's still a new frontier.
Would love to learn more about what you're seeing!
A number of companies are starting to offer this functionality: Nuon, WarpStream, Tensor9, Groundcover, and Redpanda, among others. I am bullish on it because, with the rise of unstructured and multimodal data and the need to build software that understands the context of all your institutional knowledge, the architecture will have to change. Deploying software into your chosen environment, where all your data lives, will be the way forward. It is early days, but this is the fundamental abstraction: rather than multiple control planes and access points to your data across every SaaS offering you use for your business functions, there will be one. This is a huge disruption, and ultimately I think it is where Databricks and the industry are going in the long term.
Yup, agreed with all of this. I'm on the board of AtScale, and Snowflake just made a strategic investment in them precisely to address the importance of semantic layers in the AI data stack, which is of course related to these ideas. Will check out what those companies are doing; I'm very familiar with Redpanda...
Thanks, and good to discuss with you.