Belgium 10-3-26 All Members Physical English
From modular business design to AI-driven pipelines, architectures, and operations

A composable enterprise is built on modular processes, API-driven ecosystems, low-code platforms, and cloud-native services. It promises speed and adaptability by allowing organisations to reconfigure their capabilities as conditions change. However, modular design alone does not guarantee resilience; the way these systems are engineered and operated is just as important.

This is where AI is beginning to make a difference. Beyond generating snippets of code, AI is already influencing how entire systems are developed and run: accelerating CI/CD pipelines, improving test coverage, optimising Infrastructure-as-Code, sharpening observability, and even shaping architectural decisions. These changes directly affect how quickly new business components can be deployed, connected, and retired.

In this session, we will examine how CIOs can bring these two movements together:
- Composable design as the framework for flexibility and modularity.
- AI-augmented engineering as the force that delivers the speed, quality, and intelligence needed to sustain it.
- The pitfalls of treating them in isolation: composability that collapses under slow engineering cycles, or AI that only adds complexity without a modular structure.

The discussion goes beyond concepts to practical implications: how to architect organisations that can be recomposed at speed, without losing control or reliability. The outcome is an enterprise that is not only modular in design but also engineered to adapt continuously under real-world conditions.
Belgium 12-3-26 Physical English
Tomato! Tomato! Tomato! Get your tomato now! Every vendor sells security. And every company depends on vendors, partners, and suppliers. The more digital the business becomes, the longer that list grows, and so does the attack surface. One weak link (and there is always one) or one missed update, and trust collapses faster than any firewall can react. What used to be a procurement checklist has become a full-time discipline. Questionnaires, audits, and endless documentation prove that everyone’s “compliant”, yet incidents keep happening. So it’s clear: the issue isn’t lack of policy (or maybe a bit), but mostly lack of visibility. Beyond a certain point, even the most secure organisation is only as safe as its least prepared partner (or an employee who hasn’t had their morning coffee). So how far can you trust your vendors? How do you check what you can’t control? When does assurance become theatre instead of protection? And at what cost? Let’s exchange what works and what fails in third-party risk management: live monitoring, shared responsibility models, contractual levers, and the reality of building trust in a chain you don’t own. A closed conversation for those redefining what partnership means when risk is shared but accountability isn’t.
Belgium 19-3-26 Country Members Physical French
Fewer Partners: Is consolidation worth the risk? The problem is vendor sprawl: too many tools creating complexity, a crippling integration tax, and redundancy. The integration tax is the hidden cost (in time, failures, and resources) of trying to make disparate systems work together. This exchange focuses on proven strategies for aggressively simplifying the technology estate, consolidating vendors, and elevating select key suppliers to the rank of strategic partners.
March 12, 2026 Squad Session Invitation Only Physical English
Tomato! Tomato! Tomato! Get your tomato now! Every vendor sells security. And every company depends on vendors, partners, and suppliers. The more digital the business becomes, the longer that list grows, and so does the attack surface. One weak link, and there is always one, or one missed update, and trust collapses faster than any firewall can react.
March 24, 2026 Squad Session Invitation Only Physical English
Every organisation has them: projects that keep running long after their purpose has faded. No one remembers who asked for them, but shutting them down feels riskier than keeping them alive. So people stay assigned, budgets stay allocated, and energy drains into work that no longer matters. Inertia at its finest.
March 26, 2026 Squad Session Invitation Only Physical English
AI projects continue to multiply, but proving their value remains difficult. Most organisations can track activity, not impact. Dashboards count pilots and models, yet few translate to measurable business outcomes. The result is familiar: success stories without clarity on what they actually delivered.
CIONET Trailblazer: CISO: The Shift from Prevention to Resilience: Turning Visibility into Execution
Published on: January 28, 2026 @ 9:48 AM
CIONET Trailblazer: AI Transformation: Bridging the Cultural Divide to Achieve Competitive Advantage
Published on: December 17, 2025 @ 9:16 AM
How Cohere is accelerating language model training with Google Cloud TPUs
Cohere is accelerating LLM training with Google Cloud TPUs to provide larger and more accurate LLMs to developers.

Machine Learning Engineer, Cohere
Sr. Product Manager
Over the past few years, advances in training large language models (LLMs) have moved natural language processing (NLP) from a bleeding-edge technology that few companies could access to a powerful component of many common applications. From chatbots to content moderation to categorization, a general rule for NLP is that the larger the model, the greater the accuracy it can achieve in understanding and generating language.
But in the quest to create larger and more powerful language models, scale has become a major challenge. Once a model becomes too large to fit on a single device, it requires distributed training strategies, which in turn require extensive compute resources with vast memory capacity and fast interconnects. You also need specialized algorithms to optimize the hardware and time resources.
Cohere engineers are working on solutions to this scaling challenge that have already yielded results. Cohere provides developers with a platform for working with powerful LLMs without the infrastructure or deep ML expertise that such projects typically require. In a new technical paper, Scalable Training of Language Models using JAX pjit and TPUv4, engineers at Cohere demonstrate how their new FAX framework, deployed on Google Cloud's recently announced Cloud TPU v4 Pods, addresses the challenges of scaling LLMs to hundreds of billions of parameters. Specifically, the report reveals breakthroughs in training efficiency that Cohere was able to achieve through tensor and data parallelism.
This framework aims to accelerate the research, development, and production of large language models with two significant improvements: scalability and rapid prototyping. Cohere will be able to improve its models by training larger ones more quickly, delivering better models to its customers faster. The framework also supports rapid prototyping of models that address specific objectives (for example, a generative model that powers a customer-service chatbot) by making it easy to experiment with and test new ideas. The ability to switch back and forth among model types and optimize for different objectives will ultimately allow Cohere to offer models optimized for particular use cases.
The FAX framework relies heavily on the partitioned just-in-time compilation (pjit) feature of JAX, which abstracts the relationship between device and workload. This allows Cohere engineers to optimize efficiency and performance by aligning devices and processes in the ideal configuration for the task at hand. Pjit works by compiling an arbitrary function into a single program (an XLA computation) that runs on multiple devices, even those residing on different hosts.
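To make that concrete, here is a minimal sketch (our illustration, not code from the paper; FAX itself is not public) of how pjit shards a single matrix multiply across a logical device mesh. Note that exact import paths and keyword names have shifted across JAX releases (`in_shardings` was previously `in_axis_resources`, and pjit has since been folded into `jax.jit`), and the 2x4 mesh assumes eight attached accelerators.

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.experimental.pjit import pjit           # folded into jax.jit in newer JAX
from jax.sharding import Mesh, PartitionSpec as P

# Arrange the available accelerators (e.g. TPU cores) into a 2-D logical mesh:
# one axis for data parallelism, one for tensor (model) parallelism.
devices = np.array(jax.devices()).reshape(2, 4)  # assumes 8 devices
mesh = Mesh(devices, axis_names=("data", "model"))

def layer(x, w):
    # An ordinary JAX function; pjit compiles it into one XLA computation
    # that spans every device in the mesh, even across hosts.
    return jnp.dot(x, w)

sharded_layer = pjit(
    layer,
    # PartitionSpecs describe how each array is split over the mesh axes:
    # activations sharded along "data", weights along "model".
    in_shardings=(P("data", None), P(None, "model")),
    out_shardings=P("data", "model"),
)

with mesh:
    x = jnp.ones((1024, 512))   # a batch of activations
    w = jnp.ones((512, 2048))   # one layer's weights
    y = sharded_layer(x, w)     # XLA inserts the needed cross-device collectives
```

The point of the abstraction is that `layer` itself contains no communication code; changing how work maps onto hardware is a matter of changing the mesh shape and the PartitionSpecs, not the model.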
Cohere’s new solution also takes advantage of Google Cloud’s new TPU v4 Pods to perform tensor parallelism, which is more efficient than the earlier pipeline parallelism implementation. As the name suggests, the pipeline parallel approach uses accelerators in a linear fashion to scale a workload, like a single long assembly line. Each accelerator must process its micro-batch of data before passing it along to the next accelerator, and the backward pass then runs in reverse order.
Tensor parallelism eliminates the accelerator idle time of pipeline parallelism, also known as the pipeline bubble. Tensor parallelism involves partitioning large tensors (mathematical arrays that define the relationship among multiple objects such as the words in a paragraph) across accelerators to perform computations at the same time on multiple devices. If pipeline parallelism is an ever-lengthening assembly line, tensor parallelism is a series of parallel assembly lines — one making the engine, the other the body, etc. — that simultaneously come together to form a complete car in a fraction of the time.
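The arithmetic behind that analogy fits in a few lines. The sketch below is illustrative only, with plain NumPy arrays standing in for accelerators: the weight matrix is split column-wise into two shards that can be multiplied concurrently, and the partial results are collated into exactly the same answer the unsharded matmul would give.

```python
import numpy as np

# Tensor parallelism in miniature: Y = X @ W, with W split column-wise
# into per-device shards [W0 | W1] so both matmuls can run concurrently.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))        # activations, replicated on each device
W = rng.standard_normal((8, 6))        # full weight matrix
W0, W1 = np.split(W, 2, axis=1)        # shard for device 0, shard for device 1

Y0 = X @ W0                            # computed on device 0
Y1 = X @ W1                            # computed on device 1, at the same time
Y = np.concatenate([Y0, Y1], axis=1)   # the collation step (an all-gather on TPU)

assert np.allclose(Y, X @ W)           # identical to the unsharded result
```

Because neither shard waits on the other, there is no pipeline bubble; the only added cost is the collation step, which is where fast TPU interconnects matter.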
These computations are then collated, a process made practical thanks to Google Cloud TPU v4 VMs, which more than double the computational power of their v3 predecessors. The superior performance of v4 chips has enabled Cohere to iterate on ideas and validate them 1.7x faster than before.
Aidan Gomez, CEO and co-founder, Cohere
As part of a multiyear technology partnership, Cohere leverages Google Cloud’s advanced AI and ML infrastructure to power its platform. Cohere develops and deploys its products on Cloud TPUs, Google Cloud’s custom-designed machine learning chips that are optimized for large-scale ML. Cohere recently announced its new model improvements and scalability gains from training an LLM using FAX on Google Cloud TPUs, and this work has demonstrated that transitioning from TPU v3 to TPU v4 has so far enabled a total speedup of 1.7x. In addition to a significant performance boost, TPUs provide an excellent user experience with the new TPU VM architecture. Importantly, Google Cloud ensures that Cohere's state-of-the-art ML training is achieved with the highest standards of sustainability, powered by 90% carbon-free energy in the world's largest publicly available ML hub.
By adopting Cloud TPUs, Cohere is making LLM training faster, more economical, and more agile. This helps it provide larger and more accurate LLMs and put NLP technology in the hands of developers and businesses of all sizes.
To learn more about these LLM training advances, you can read the full paper, Scalable Training of Language Models using JAX pjit and TPUv4. To learn more about Cohere's best practices and AI principles, you can check this article co-authored with OpenAI and AI21 Labs.
Digital Transformation is redefining the future of health care and health delivery. All stakeholders are convinced that these innovations will create value for patients, healthcare practitioners, hospitals, and governments along the patient pathway. The benefits range from prevention and awareness to diagnosis, treatment, short- and long-term follow-up, and ultimately survival. But how do you make sure that you're working towards an architecturally sound, secure, and interoperable health IT ecosystem for your hospital and avoid implementing a hodgepodge of spot solutions? How does your IT department work together with the other stakeholders, such as the doctors and other healthcare practitioners, life sciences companies, tech companies, regulators, and your internal governance and administrative bodies?
The Telenet Business Leadership Circle, powered by CIONET, offers a platform where IT executives and thought leaders can meet to inspire each other and share best practices. We want to be a facilitator who helps you optimise the performance of your IT function and your business by embracing the endless opportunities that digital change brings.
Discover the dynamics of digital leadership at the Rencontres de CIONET, CIONET's exclusive French-language programme for digital leaders in Belgium, made possible by the support and commitment of our programme partners: Deloitte, Denodo, and Red Hat. Join three inspiring events per year in Liège, Namur, and Brabant Wallon, where leading French-speaking CIOs and digital experts share their perspectives and experiences on current business and IT themes. Be inspired and learn from the best in the sector during captivating sessions designed specifically to support and enrich your role as a CIO peer. Don't miss this opportunity to be part of an exceptional network of digital innovators!
CIONET is committed to highlighting and celebrating female role models in IT, Tech & Digital, creating a leadership programme that empowers and elevates women within the tech industry. This initiative is dedicated to showcasing the achievements and successes of leading women, fostering an environment where female role models are recognised, and their contributions can ignite progress and inspire the next generation of women in IT. Our mission is to shine the spotlight a little brighter on female role models in IT, Tech & Digital, and to empower each other through this inner network community.
Would you like to know more about CIONET Belgium, membership or partnership opportunities? Do you have feedback or any other question? Send us a message!
You can either send us a registered handwritten letter explaining why you'd like to become a member or you can simply talk to us right here!