| Page 236 | Kisaco Research

California faces over 7,000 wildfires each year, with enormous costs to lives, communities, and ecosystems. Responding faster requires distributed sensing and intelligence that can act in the field, where traditional satellites and watchtowers fall short. Wywa.ai First Responder is an open-science initiative led by researchers from MIT and CMU, together with industry leaders and policy experts, to design and deploy a scalable wildfire early-warning network. The system combines ultra-low-cost LoRa-enabled chemical sensors with edge AI and vision-language models. These distributed “artificial noses” continuously monitor the air for smoke and combustion signatures. When risk thresholds are crossed, the sensors activate nearby edge vision systems that confirm wildfire presence and generate real-time alerts for first responders and civic authorities. We will present results from early deployments, highlight the LoRa network architecture and AI model training that make such systems deployable at scale, and discuss how open collaboration across academia, industry, and government can accelerate resilience. The session will include a live demonstration of how edge intelligence can empower communities to act in the earliest, most critical moments of wildfire response.
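The trigger chain the abstract describes (chemical sensing, then edge vision confirmation, then alerting) can be sketched as follows. This is an illustrative sketch only; the thresholds, function names, and return values are assumptions, not details of the actual Wywa.ai First Responder implementation.

```python
# Hypothetical sketch of a threshold-triggered wildfire alert pipeline.
# All names and values are illustrative, not from the real system.

SMOKE_PPM_THRESHOLD = 40.0  # assumed combustion-gas trigger level


def read_chemical_sensor() -> float:
    """Stand-in for a reading (ppm) from a LoRa-connected gas sensor."""
    return 12.5


def confirm_with_vision() -> bool:
    """Stand-in for an edge vision model confirming visible smoke/fire."""
    return False


def alert_responders(reading: float) -> None:
    """Stand-in for pushing a real-time alert to first responders."""
    print(f"ALERT: smoke at {reading} ppm, visually confirmed")


def poll_once() -> str:
    """One polling cycle: cheap chemistry first, vision only on demand."""
    reading = read_chemical_sensor()
    if reading < SMOKE_PPM_THRESHOLD:
        return "idle"
    if confirm_with_vision():
        alert_responders(reading)
        return "alert"
    return "watch"  # elevated chemistry, no visual confirmation yet
```

The two-stage design mirrors the abstract's power budget: the always-on chemical sensors are cheap to run, and the costlier vision models wake only when the chemistry crosses a threshold.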

Autonomy
On-Device ML
Robotics
Industrial Edge

Author:

Anirudh Sharma

Researcher
Amazon Lab126

Anirudh Sharma is a researcher and inventor whose work spans human factors, speech and vision interfaces, and system design. With a research background at the MIT Media Lab and now at Amazon Lab126, he builds novel computing interfaces that merge advanced sensing with real-world applications. His first venture developed and shipped gait-sensing haptic insoles that help elderly and visually impaired people navigate through tactile feedback, now used worldwide. He later co-founded Graviky Labs, which turns air pollution into usable materials. His contributions have earned recognition from MIT Technology Review (TR35), Forbes 30 Under 30, TIME 100, and TED Global.

Author:

Navya Veeturi

Program Founder
Wywa.ai First Responder

Navya Veeturi is the founder of Wywa.ai First Responder, an open initiative focused on protecting communities and forests from wildfires through the power of low-cost sensors, edge AI, and generative intelligence. With a background in leading AI and data engineering teams at NVIDIA, Navya combines technical expertise, product vision, and community impact to build scalable, AI-driven solutions that empower first responders, local leaders, and citizens.

Data Privacy & Governance
Enterprise Use Case

Author:

Anusha Nerella

Senior Principal Software Engineer
State Street

Anusha Nerella is an award-winning AI and fintech innovator known for her original contributions in transforming institutional trading and digital finance. She has pioneered AI-driven trading strategies, real-time big data systems, and automation frameworks that have redefined how financial institutions operate. Anusha’s innovations—from modernizing Barclaycard’s digital payments infrastructure during COVID-19 to architecting intelligent trading models—have driven measurable impact, earning her recognition as a thought leader shaping the future of AI-powered finance.

What does it take to run one of the world's largest AI supercomputers? As artificial intelligence workloads grow exponentially, operating a hyperscale AI cloud fleet demands new strategies for resilience, efficiency, and operational excellence. This session explores Microsoft’s approach to scaling infrastructure for 100X growth, focusing on the intersection of system innovation and advanced fleet management.

Storage

Author:

Dharmesh Patel

Partner, Manufacturing Quality Engineering
Microsoft

Dharmesh Patel serves as the General Manager and head of the Quality Engineering Organization at Microsoft. In this capacity, he oversees the AI Fleet Quality team to ensure AI capacity, stability, and reliability throughout the hardware supply chain from manufacturing to data centers. His responsibilities include enabling Microsoft to scale AI capacity while maintaining high hardware quality standards across all stages of product development from concept through mass production. With nearly twenty years of experience in managing complex products and promoting process excellence within data centers, Dharmesh is a recognized leader in his field.

Author:

Prabhat Ram

Partner, Software Architect
Microsoft

Industrial Edge
On-Device ML
Enterprise Use Case

Author:

Prem Theivendran

Director, Software Engineering
Expedera

Prem Theivendran is Director of Software Engineering at Expedera, where he leads the development and productization of Expedera’s software toolchain and SDK. With expertise in deep learning, Prem has held hardware and software design roles at Intel, Cisco, Cavium, and Xpliant. Prem holds a Bachelor of Science in Electrical Engineering and Computer Sciences from the University of California, Berkeley.

Responsible AI is often framed in terms of ethical models and fair data—but the foundation for responsibility lies in infrastructure. In this talk, we’ll explore how platform-level capabilities like environment isolation, auditability, workload reproducibility, and resource-aware orchestration are essential to delivering AI that’s not just performant, but trustworthy. We’ll also examine how infrastructure decisions directly impact the quality and reliability of model evaluations—enabling teams to catch bias, ensure compliance, and meet evolving governance standards. If you’re building or scaling AI systems, this session will show how infrastructure becomes the enabler of responsible AI at every stage.
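To make the auditability and reproducibility points above concrete, here is a minimal sketch of recording a model-evaluation run so it can be verified and reproduced later. The function and field names are hypothetical, not Red Hat tooling; it simply illustrates hashing the deterministic inputs of a run so two runs with the same model, data, and config can be matched.

```python
# Hypothetical sketch: an auditable record of one model-evaluation run.
import hashlib
import json
import time


def audit_record(model_id: str, dataset_id: str, config: dict, metrics: dict) -> dict:
    """Build a reproducibility record for a single evaluation run."""
    payload = {
        "model_id": model_id,
        "dataset_id": dataset_id,
        "config": config,
        "metrics": metrics,
        "timestamp": time.time(),
    }
    # Hash only the deterministic inputs (not metrics or timestamp),
    # so identical runs produce an identical digest for later audit.
    digest_input = json.dumps(
        {k: payload[k] for k in ("model_id", "dataset_id", "config")},
        sort_keys=True,
    ).encode()
    payload["config_digest"] = hashlib.sha256(digest_input).hexdigest()
    return payload


record = audit_record("clf-v2", "eval-2024q1", {"seed": 7}, {"accuracy": 0.91})
```

Storing such records alongside evaluation results is one way infrastructure can back up governance claims: a bias or compliance finding can be traced to the exact model, data, and configuration that produced it.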

Data Privacy & Governance
Enterprise Use Case

Author:

Taylor Smith

Senior AI Advocate
Red Hat

Taylor Smith is a Senior AI Advocate at Red Hat, where she champions open source innovation and the responsible adoption of AI at scale. With a background in software development, Kubernetes, Linux, and technical partnerships, she focuses on helping organizations build and operationalize AI using modern infrastructure. Taylor is passionate about making AI more accessible, trustworthy, and grounded in real-world use cases. 

Memory
Generative AI

Author:

Euicheol Lim

Research Fellow, System Architect
SK Hynix

Eui-cheol Lim is a Research Fellow and leader of the Solution Advanced Technology team at SK Hynix. He received the B.S. and M.S. degrees from Yonsei University, Seoul, Korea, in 1993 and 1995, and the Ph.D. degree from Sungkyunkwan University, Suwon, Korea, in 2006. Dr. Lim joined SK Hynix in 2016 as a system architect in memory system R&D. Before joining SK Hynix, he worked as an SoC architect at Samsung Electronics, where he led the architecture of most Exynos mobile SoCs. His recent research interests include memory and storage system architecture with new media memory, as well as new memory solutions such as CXL memory and Processing-in-Memory (PIM). In particular, he is proposing a new PIM-based computing architecture, more efficient and flexible than existing AI accelerators, for processing generative AI and large language models (LLMs).

Author:

Mark Kuemerle

VP, Technology, Custom Cloud Solutions
Marvell

Mark Kuemerle is Vice President, Technology, Custom Cloud Solutions at Marvell. In this role, he defines leading-edge ASIC offerings, drives product competitiveness, and architects system-level solutions. Before joining Marvell, Mark was a Fellow in Integrated Systems Architecture at GLOBALFOUNDRIES and held multiple engineering positions at IBM.

He has authored numerous articles on die-to-die connectivity and multichip systems and holds several patents related to low-power technologies and package integration. Mark earned a Master of Science and a Bachelor of Science degree in Computer Engineering from Case Western Reserve University. 

Autonomy
On-Device ML
Enterprise Use Case
Industrial Edge

Author:

Shreya Singhal

Applied Generative AI Research Scientist
Claritev

Shreya Singhal is an Applied Generative AI Research Scientist at Claritev, where she works on building and optimizing large-scale AI systems with a focus on LLMs, multimodal models, and AI agents. She holds a Master’s in Computer Science from the University of Texas at Austin and has prior experience across industry and research roles at organizations such as Dell Technologies, Charles Schwab, and IIIT Hyderabad. Her work spans retrieval-augmented generation, prompt engineering, and deploying production-grade AI pipelines, with a passion for advancing the infrastructure that powers generative AI.

Behind every AI product at WBD lies a shared foundation — a unified AI/ML architecture designed to handle everything from data ingestion to large-scale model deployment. In this session, we’ll take you under the hood of the platform that powers recommendations, personalization, content understanding, and more. You’ll learn how we designed for scalability, flexibility, and reliability across a diverse product portfolio, and the lessons we learned building an AI infrastructure that can serve billions of interactions a day.

Data Privacy & Governance
Enterprise Use Case

Author:

Nenad Mancevic

Principal Software Engineer
Warner Bros. Discovery
