
Sovereign AI Data Centers in Canada

Episode Summary

Saeed Shamsi, Director of Engineering and AI Factory Lead at TELUS, joins Mudassar Malik on Behind the Growth to break down what it takes to build enterprise AI infrastructure that ships fast, stays compliant, and operates within Canada’s regulatory boundaries. He opens by grounding the conversation in TELUS’s evolution beyond telecom, explaining why its expansion into data-driven businesses made sovereign AI a practical necessity, not a future bet.

Saeed explains why TELUS refers to its infrastructure as an “AI factory,” describing it as a system that turns power, cooling, and compute into usable intelligence at scale. He walks through how TELUS retrofitted existing Canadian data centres with GPU clusters designed to support model training, fine-tuning, inferencing, and modern AI applications without forcing customers into a single model, stack, or hyperscaler dependency.

The discussion then shifts to the problem facing Canadian enterprises: the gap between Canada’s deep AI talent pool and its shortage of domestic compute. Saeed outlines why data residency, jurisdictional control, and regulatory compliance are critical for sectors like healthcare and financial services. He also explains how TELUS is addressing these constraints while giving startups and enterprises room to move, iterate, and retain their IP.

To close, Saeed gets specific about architecture and execution, from adopting NVIDIA’s reference designs to validating performance through global benchmarks. He looks ahead to scaling sustainable, sovereign AI infrastructure so Canadian organizations can compete globally without exporting their data, decisions, or innovation.

Featured Guest

  • Name: Saeed Shamsi
  • What he does: Director of Engineering and AI Factory Lead
  • Company: TELUS
  • Noteworthy: As the TELUS AI Factory Lead, Saeed Shamsi spearheads the development of one of Canada’s largest sovereign AI data centres, built on NVIDIA technology. His role centres on building a robust AI ecosystem that empowers innovation across the country. He focuses on ensuring the infrastructure scales to meet growing AI demand, while prioritizing security to protect sensitive data within this sovereign framework. He also champions sustainability initiatives to minimize environmental impact, creating a future-proof AI platform that supports Canada’s leadership in advanced technology and responsible digital transformation.

Connect on LinkedIn

Key Insights

Sovereign AI infrastructure is an execution constraint, not a policy preference
AI strategy breaks down without domestic compute to support it. Canada is described as rich in AI talent but significantly behind in available infrastructure, forcing organizations to run workloads outside the country. In regulated sectors like healthcare and financial services, this introduces jurisdictional risk and compliance barriers, sometimes down to provincial data residency requirements. AI initiatives can stall regardless of model quality if infrastructure decisions are misaligned. Infrastructure location, ownership, and regulatory alignment emerge as foundational requirements for deploying AI at scale, not secondary considerations to be addressed after pilots are already underway.

AI platforms must support diverse workloads without locking organizations into a single stack
Most enterprises are not training large foundational models, but they still require infrastructure that can support fine-tuning, inferencing, and modern application patterns such as retrieval-augmented generation and agent-based systems. Saeed highlights the risk of tightly coupling AI workloads to specific backends or vendors, which limits flexibility as use cases evolve. He reinforces that AI platforms should be assessed on adaptability, not just raw performance. Infrastructure that supports multiple workload types and model choices allows teams to move faster, iterate safely, and avoid re-architecting as business needs change.

Validated performance and sustainability are table stakes for enterprise AI credibility
Operating AI infrastructure at scale demands proof, not claims. Saeed emphasizes the importance of independently validated benchmarks to demonstrate that systems can deliver consistent performance across training, fine-tuning, and inferencing workloads. External rankings also provide credibility that engineering decisions hold up against global standards. At the same time, sustainability is treated as inseparable from performance, given the power demands of AI data centers. Infrastructure choices must satisfy both operational rigor and environmental responsibility to remain viable under regulatory, financial, and reputational scrutiny.

In Canada, we are talent rich, but infrastructure poor.

Episode Highlights

Why It’s an AI Factory

Saeed explains the reasoning behind the term “AI factory” by grounding it in physical inputs and outputs, not branding. He frames AI infrastructure as a system that consumes power and cooling and produces intelligence, making the concept tangible and operational rather than abstract.

“Why we call it AI factory because factory, have some raw materials coming and then you have the finished goods out. Same topics for our AI factory. The raw materials coming in this case is going to be our power. And then sometimes for the cooling, we have water as well… and the finished goods is going to be intelligent.”

Canada: Talent Rich, Compute Poor

Reframing Canada’s AI challenge as an infrastructure problem rather than a talent problem, Saeed contrasts the country’s world-class AI researchers with its lack of domestic compute, positioning infrastructure scarcity as a hard constraint on national and enterprise AI progress.

“In Canada, that is realistic. We are talent rich, but infrastructure or compute poor. So when you compare the amount of the infrastructure, AI infrastructure available in Canada compared to G7, we are dead last.”

Data Must Never Cross Borders

Saeed is explicit about why sovereignty is not optional for certain industries. He ties data residency directly to regulation and jurisdiction, especially for healthcare and other regulated sectors where data location is legally constrained.

“We want to guarantee that the data is going to reside in Canada and never cross the border.”

Training Without a Track

Using a practical analogy, Saeed explains why talent alone is insufficient without infrastructure. The comparison makes clear that AI capability requires an environment to train, test, and scale, not just skilled people.

“If you have the best engine and you want to train the best, the racing car drivers without having a track, you cannot train them.”
