With all the rapid advancements in machine learning and AI, it can feel like we’re constantly playing catch-up. Over the last two Civo Navigate conferences, Berlin 2024 and San Francisco 2025, Civo brought together leading experts to discuss the future of AI, machine learning, and the growing challenges and opportunities for developers and businesses.
During these sessions, Civo’s Chief Innovation Officer, Josh Mesout, sat down with panelists to delve into the latest AI trends, their practical applications in the tech industry, and key topics such as deep learning advancements, data privacy, ethics, and the evolving dynamics of human-machine collaboration.
- Civo Navigate San Francisco 2025: AI & ML Experts Reveal the Future – What’s Next for Innovation?
- Civo Navigate Berlin 2024: The Future of AI and ML: Are You Ready to Navigate the Change?
In this blog post, we’ll highlight some of the key takeaways from these discussions and catch you up on what’s been happening in the world of AI over the last few months.
Current trends
Although things are constantly changing within the AI industry, some clear trends have emerged that were central topics during these discussions:
Multimodal AI
If you’re not familiar, multimodal AI is a type of artificial intelligence that can process and integrate multiple types of input data, like images, audio, and text, all at once, matching how users naturally interact with technology. As Nami Baral, Founder of Niural, explained:
“Now your agents and models in general have expanded beyond just text or reasoning. You see voice-based agents…and image-based agents as well.” She added, “The combination of all of these different advancements in AI models will actually create something that is the closest replica of a labor substitute.”
The implications of multimodal AI are far-reaching, with potential applications in industries such as healthcare, finance, and education. However, as we develop and deploy multimodal AI systems, we must also consider the potential challenges and limitations. For instance, how will we ensure that these systems are fair and unbiased, particularly when they're processing multiple types of input data?
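To make the idea a little more concrete, here is a minimal sketch of "late fusion", one common way multimodal systems combine inputs: encode each modality separately, then merge the results into one joint representation. The encoders below are deliberately fake stand-ins for real image and text models, so treat this as an illustration of the pattern rather than a working multimodal model:

```python
import numpy as np

# Hypothetical stand-in encoders. In practice these would be pretrained
# models that map each modality into an embedding space (the approach
# popularised by CLIP-style systems).
def encode_image(pixels: np.ndarray) -> np.ndarray:
    return pixels.flatten()[:512] / 255.0  # fake 512-dim image embedding

def encode_text(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.normal(size=512)            # fake 512-dim text embedding

def fuse(image_vec: np.ndarray, text_vec: np.ndarray) -> np.ndarray:
    # "Late fusion": combine per-modality embeddings. Concatenation is
    # the simplest strategy; weighted sums and cross-attention are
    # common alternatives in real systems.
    return np.concatenate([image_vec, text_vec])

image = np.zeros((64, 64, 3))  # dummy image in place of real pixels
combined = fuse(encode_image(image), encode_text("a photo of a cat"))
print(combined.shape)          # (1024,) - one joint representation
```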
AI agents
You’re probably tired of hearing about them, but AI agents continue to be one of the biggest use cases in the field. This momentum is largely driven by the demand for automation, whether it’s assisting with software engineering workflows or handling everyday, repetitive tasks. Jimil Patel, Head of Product Marketing at Intuit, described his ideal personal assistant during the panel discussion by saying:
“Every morning I like to read Google News, Hacker News, and a few podcasts. I want an agent to do that for me and give me a summary, picking up the things that need my attention.” He went on to emphasize the broader impact by saying, “These agents give the power of AI in anyone’s hands rather than only developers or corporate workers.”
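As a rough illustration of the kind of assistant Patel describes, here is a minimal sketch of the "gather and summarize" loop. It pulls real headlines from Hacker News' public API, but the summarization step is a placeholder where a real agent would hand the content to an LLM with instructions about what needs your attention:

```python
import json
import urllib.request

HN_API = "https://hacker-news.firebaseio.com/v0"

def fetch_top_stories(limit: int = 5) -> list[dict]:
    """Pull the current top Hacker News stories via its public API."""
    with urllib.request.urlopen(f"{HN_API}/topstories.json") as resp:
        ids = json.load(resp)[:limit]
    stories = []
    for story_id in ids:
        with urllib.request.urlopen(f"{HN_API}/item/{story_id}.json") as resp:
            stories.append(json.load(resp))
    return stories

def summarize(stories: list[dict]) -> str:
    # Placeholder: a real agent would send these headlines to an LLM
    # with a prompt like "flag anything relevant to my interests".
    # Here we just format them so the sketch runs without an API key.
    return "\n".join(
        f"- {s.get('title')} ({s.get('score')} points)" for s in stories
    )

if __name__ == "__main__":
    print("Your morning briefing:\n" + summarize(fetch_top_stories()))
```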
The big takeaway here is that by integrating multiple modalities, multimodal AI can build a more complete understanding of the data, while agents can aggregate and summarize information from different sources that would otherwise take significant time to gather.
The enthusiasm for AI agents reflects their potential to revolutionise the way we work and live. By automating routine tasks, AI agents can free up time for more creative and high-value tasks, leading to increased productivity and efficiency. Moreover, AI agents can democratise access to AI technology, making it more accessible to a wider range of people, not just developers and corporate workers.
However, as we develop and deploy AI agents, we must also consider the potential risks and challenges. For instance, how will we ensure that these agents are transparent, explainable, and fair? And what are the implications for jobs and industries that rely heavily on routine tasks? To mitigate these risks, it's essential to build transparency and accountability into the development process itself.
The potential for AI agents is vast, and it's essential to continue exploring their applications and benefits. By doing so, we can unlock new opportunities for growth, innovation, and progress.
Anomaly detection
One of the most practical and mature applications of AI in cybersecurity right now is anomaly detection. At its core, this means training AI models to understand what “normal” behavior looks like across systems—whether that’s network traffic, user logins, or file access patterns—and then flagging anything that looks suspicious.
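As a concrete example of that pattern, here is a minimal sketch using scikit-learn's IsolationForest: fit a model on what "normal" telemetry looks like, then flag observations that deviate from it. The features and numbers are toy assumptions, not a production detector:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy stand-in for login telemetry: [hour of day, KB transferred].
# "Normal" activity clusters around business hours and modest transfers.
normal = np.column_stack([
    rng.normal(13, 2, 500),       # logins mostly in the early afternoon
    rng.normal(2_000, 400, 500),  # typical transfer sizes in KB
])

# Fit on normal behaviour only; the model learns what "usual" looks like.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

# A 3 a.m. login moving 50 MB should stand out.
suspicious = np.array([[3, 50_000]])
print(model.predict(suspicious))  # -1 means flagged as an anomaly
print(model.predict(normal[:3]))  # 1 means the point looks normal
```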
Tristan Cormier, CIO at the California Secretary of State, noted:
“Engineers love to look at the data and automate their security processes. They’re really interested in AI formalizing the search for security issues and going through a lot of data… What’s the governance, and how will that be implemented before we can go to the tool?”
The use of AI in threat detection is an interesting conversation. Advancements in the Linux kernel have given rise to technologies such as eBPF, which allow not only real-time detection but also stopping threats at the kernel level. Applying AI to these existing developments can make for a more comprehensive threat and anomaly detection strategy, as machine learning algorithms get better at identifying what is considered “normal” for your system, and consequently what is a threat.
However, as Tristan Cormier pointed out, implementing AI-driven security solutions requires careful consideration of governance and policy. This means ensuring that AI models are transparent, explainable, and fair, and that they are integrated into existing security processes in a way that is consistent with organisational policies and procedures. Organisations must also weigh the risks that come with AI-driven security solutions, such as bias in AI models and the need for ongoing training and maintenance.
Is AI augmenting or replacing humans?
The debate around whether AI is replacing humans is reminiscent of the early days of the internet. Sure, some jobs were rendered obsolete, but countless others were enhanced. Entire industries like media entered a new era, reaching global audiences thanks to unprecedented connectivity. Artists could distribute and monetize their work more easily, bypassing traditional gatekeepers.
In much the same way, AI has the potential to support, rather than replace, many existing professions if implemented thoughtfully.
An interesting point raised by the panellists was whether AI is doing more replacing than augmenting. The general consensus: we’re seeing far more augmentation than outright replacement, especially in areas where human oversight is critical and the margin for error is razor-thin.
As Florian from Jina AI noted during a similar discussion, "they're all kind of augmentation-type use cases... they're not where you're leaving the large language model to be independent; it's actually a lot of human supervision". This sentiment is echoed by Camila Lomana, who emphasized the importance of having a "human in the loop" due to regulatory requirements and the limitations of AI technology.
Tristan Cormier, who works for the California Secretary of State, highlighted this point by talking about the sensitivity of government workflows. In areas like policy processing and public data systems, there is simply no room for AI to make mistakes. When speaking about the sensitive nature of the work done by the Secretary of State, Tristan explained that:
“We don’t have the opportunity to get it wrong.”
While AI can help in these areas, human oversight is needed to ensure misinformation does not slip through the cracks. The takeaway: AI isn’t taking over. It’s becoming the assistant, not the boss. And that balance isn't changing anytime soon for many industries, especially high-stakes ones.
The effects of AI augmentation vary across industries, resulting in different outcomes. In healthcare, AI can be used to spot recurring patterns when making a diagnosis. In customer service, AI-powered chatbots are becoming increasingly prevalent, providing 24/7 support with added context and more meaningful responses than traditional chatbots, which reply based on a fixed set of inputs.
Understanding the nuances of AI's impact on different industries can provide valuable insights into its potential applications and limitations.
As AI continues to evolve, it's vital to consider the potential risks and challenges associated with its adoption. A major concern is transparency: how do we ensure AI systems are explainable and fair, and how do we address potential biases in AI decision-making? By answering these questions, we can ensure that AI is developed in a manner that is fair and that addresses real concerns.
Impact of open source models on advancements in AI
With companies like DeepSeek open-sourcing their large language models, there has been a growing debate about whether this openness is ultimately beneficial for the progression of AI. On one side, there are concerns around safety, misuse, and potential competitive threats. But on the other, there’s a strong case for collaboration driving innovation.
Gaurav Bharaj made an important point: more eyes on a model means more opportunities for improvement. When we work together, we learn faster, and that collective intelligence is exactly what’s fueled so many of the open-source projects we rely on today. Without a community committed to experimentation, review, and iteration, a lot of the tech we take for granted simply wouldn’t exist in the form it does now.
As Florian noted, "why would I go pay for a model when I can have one for free of comparable quality", highlighting the value proposition of open-source models. However, Camila also pointed out that:
"There's some open-source level, but it's not really on par with the source. And because of that, the risks that they can contribute are also significant, because we don't know how they're being built on top, and they don't have an accountability framework for the building of these open-source models."
The benefits of open-source models are clear: they can accelerate innovation, improve transparency, and facilitate collaboration. By making AI models and datasets openly available, researchers and developers can build upon each other's work, leading to faster progress and more robust solutions.
However, open-source datasets bring a more nuanced discussion. The key question is how those datasets were obtained. If the data sourcing isn’t ethical, transparent, or legally sound, then no amount of openness can make it right.
Tristan Cormier highlighted this by saying “It’s beyond just the tech”, referring to how the conversation around the governance of datasets and model training goes beyond technology as it's increasingly a legal and regulatory issue. This isn’t just about how we build AI systems, but about the frameworks we use to hold them accountable.
In addition, it's essential to develop guidelines and best practices for open-source AI development, including transparent data sourcing, robust testing and validation, and ongoing monitoring and evaluation. By doing so, we can ensure that open-source AI models are not only innovative but also responsible and trustworthy.
The open-source AI movement is driving innovation and collaboration, but it also raises important questions about accountability and governance. As the field continues to evolve, it is likely that we will see a growing emphasis on developing frameworks and guidelines that balance the need for openness and collaboration with the need for responsible AI development. By working together, we can create a future where AI is developed and deployed in a way that benefits humanity.
Is there a right way to adopt agentic AI?
When it comes to AI agents, one of the most important questions is: Is there a right way to adopt them? It’s not just about what the technology can do; it’s about whether you actually need it in the first place.
Nami Baral echoed a key point originally made by Kelsey Hightower in his talk: before jumping into building an AI solution, ask yourself if your business truly needs one. Chasing hype without a clear use case can lead to over-engineering and wasted effort.
Adopting AI also calls for more care in regions covered by new regulations like the EU's AI Act, which seeks to establish risk-based rules to ensure AI is developed and used safely and ethically.
How to comply with such regulations when resources are limited was a focal point during a panel at Civo Navigate Europe 2024. Civo's Chief Innovation Officer, Josh Mesout, specifically asked for "advice [they'd] give to the members of the audience who have to adopt these new regulations but may not have the width of resources to be able to do so at scale”.
In response, panellist Florian Hoenicke observed that "it's good that the AI Act regulates the use cases and not the technology itself," suggesting the focus on application might offer a clearer path. Nikita Loik expanded on this, highlighting the need for organisational support rather than just technical skill: "we cannot put more pressure on engineers to learn suddenly about human rights... that's a whole field in itself, but the teaching of collaboration and the sponsorship that has to come from leadership in order to expand this cycle."
The discussion around adopting agentic AI highlights the importance of a holistic approach that considers the specific needs of the business, the regulatory landscape, and the organizational capabilities. By doing so, companies can ensure a successful adoption of AI agents that is not only technically sound but also aligned with their overall strategy and goals.
To achieve this, companies should prioritize leadership buy-in, collaboration, and a deep understanding of the potential risks and benefits associated with AI adoption. This includes investing in employee training and development, establishing clear governance and oversight structures, and fostering a culture of transparency and accountability.
By taking a thoughtful and multi-faceted approach to AI adoption, companies can unlock the full potential of AI agents and drive business success.
Are we AGI yet?
Long before the recent surge of Large Language Models (LLMs), the discussion around Artificial General Intelligence (AGI) was already lingering. Apple co-founder Steve Wozniak famously expressed skepticism about machines achieving true, human-like consciousness or understanding anytime soon. This begs the question: what is AGI, anyway? And how close are we to achieving it?
Briefly defined, Artificial General Intelligence (AGI) refers to a hypothetical type of AI possessing the ability to understand, learn, and apply knowledge across a wide range of intellectual tasks at a level comparable to, or even exceeding, that of a human being. Unlike today's "narrow" AI, which excels at specific tasks it's trained for, AGI would exhibit flexible, adaptable, and generalised cognitive abilities.
However, actually achieving AGI is only part of the challenge; simply defining it in a measurable way proves difficult. As the panel discussion touched upon, this lack of consensus is significant. Gaurav Bharaj reminded the audience that Microsoft defines AGI somewhat pragmatically, suggesting it's potentially “a system capable of generating $100 billion in profits”. This speaks to part of a larger conversation surrounding AGI: what does it actually mean in practice? As it stands, we lack a universally clear definition or benchmark for this highly anticipated and potentially transformative stage of AI development.
The pursuit of AGI raises fundamental questions about the future of AI research and its potential impact on society. As we continue to push the boundaries of what is possible with AI, it is vital to consider the potential impact of reaching AGI, however far off it may seem, and especially the risks before the benefits.
For instance, AGI could potentially revolutionize industries such as healthcare, finance, and education, but it also raises deeper concerns about bias, accountability, and whether we need standards for what can be considered AGI.
Furthermore, as we continue to develop and refine AI technologies, it is crucial to prioritize transparency, explainability, and accountability in AI decision-making. This includes developing more robust guidelines and regulations for the development and deployment of AGI when that eventually happens.
Conclusion
As AI has so many potential applications, keeping up can be a challenge when it seems like a new product or model is released every other week. This post went over some of the recurring trends and future predictions for the coming months in AI.
The discussions at Civo Navigate underscore the complexity and many-sided nature of the issues surrounding AI. As we continue to push the boundaries of what is possible with AI, it's vital to consider the potential implications of its development and deployment. This includes addressing concerns around potential job displacement, bias, and accountability, as well as ensuring that AI is developed and used in a way that is transparent, explainable, and fair.
Keep learning at Civo Navigate
Civo Navigate is your destination for cutting-edge talks and workshops focused on emerging technology, cloud innovation, and the future of AI.
👉 Find out more about upcoming events!
If you missed the panel discussion at Civo Navigate, you can watch a replay on YouTube. Otherwise, check out these posts next if you are interested in Machine learning and AI: