Tech Talk: How In-Memory Computing Enables a New Generation of Microservices
This video discusses why today’s business solutions need a next-generation microservices architecture.
The first generation of microservices was envisioned as stateless request-response endpoints, but it is now clear that microservices must often maintain state. For example, microservices that run machine learning models or perform statistical classification must maintain their models and parameter weights. This raises one of the biggest challenges: where is that state stored? Options such as RDBMSs are too slow, do not scale, and have inflexible schema models. Distributed in-memory caching, by contrast, is the only widely adopted enterprise technology that offers high speed, scalability, and dynamic schema evolution.

In this talk, I will discuss:

- Why today's business solutions need a next-generation microservices architecture.
- Why microservices need to leverage in-memory computing technologies.
- How you can get started with next-generation microservices.

Presenter Bio:

Lucas is a senior solutions architect at Hazelcast, where he helps Hazelcast's most demanding customers architect, design, and operationalize enterprise software systems based on Hazelcast IMDG and Jet. Before joining Hazelcast, Lucas held similar positions at GigaSpaces and GridGain, giving him a uniquely broad and deep understanding of the in-memory platform space. Lucas holds a B.S.E. in computer science from the University of Michigan.
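The stateful-microservice pattern the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not code from the talk: a scoring service keeps its model weights in a key-value map, keyed by model ID. In production that map would be a distributed in-memory structure (such as a Hazelcast IMap); here a local `ConcurrentHashMap` stands in so the sketch stays self-contained and runnable.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a stateful microservice: model weights live in a key-value
// map rather than in the request/response cycle. A distributed in-memory
// map (e.g. Hazelcast IMap) would replace ConcurrentHashMap in practice.
public class ScoringService {
    private final Map<String, double[]> modelWeights = new ConcurrentHashMap<>();

    // Load or update the weights for a named model.
    public void putModel(String modelId, double[] weights) {
        modelWeights.put(modelId, weights.clone());
    }

    // Score a feature vector with a simple linear model (dot product).
    public double score(String modelId, double[] features) {
        double[] w = modelWeights.get(modelId);
        double sum = 0.0;
        for (int i = 0; i < features.length; i++) {
            sum += w[i] * features[i];
        }
        return sum;
    }
}
```

Because the state sits in an external (or externally replicable) map rather than in instance fields tied to one process, the service can be scaled out while every replica sees the same model weights.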