
AWS re:Invent 2021: New Developments Lead to New Opportunities

Authors: Jason Halpern and Christopher R. Barbour, Ph.D.

 

AWS re:Invent returned to Las Vegas in person for its annual conference this past year. In addition to the return of large crowds, long-distance power walks, and social encounters in the Vegas nightlife, it also brought announcements of new services and discussions of the use cases being solved on the world’s largest Infrastructure as a Service (IaaS) platform.

As Atrium partners with AWS to deliver Machine Learning and Analytics solutions, we centered our efforts on the services and products that help enhance value and drive operational efficiency for our customers. With this scope in mind, we identified four main takeaways from our stay at re:Invent:

  • Continued development of MLOps and predictive modeling tools
  • Data lakes/meshes as a data model
  • Infrastructure as code
  • Recommender systems using AWS Personalize

Continued Development of MLOps and Predictive Modeling Tools

MLOps, or the process of deploying and maintaining predictive models in a production environment, has been a hot topic at both Atrium and AWS in recent years. Best practices and dedicated tools have been developed to reduce the time required for data scientists and machine learning engineers to develop, validate, deploy, and monitor predictive models reliably and securely. The SageMaker suite of products provides the necessary tools to achieve these goals in a single, cohesive system, and SageMaker has become one of the fastest-growing AWS services to date. A sample of these capabilities, illustrated in the sketch after this list, includes:

  • SageMaker Feature Store can be used to store and access predictive features to share across different models and teams, increasing the consistency of these model inputs and improving data quality.
  • SageMaker Model Registry allows you to catalog and version trained models with specific training metrics and metadata in preparation for deployment.
  • SageMaker Model Monitor provides the ability to easily and continuously monitor predictive models for drift and degradation in performance and accuracy.
  • SageMaker Pipelines can be used to automate and manage the full end-to-end lifecycle of these models using modern CI/CD standards.
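
To make a couple of these capabilities concrete, the Python sketch below (using boto3) writes a record to a Feature Store feature group and registers a trained model version in the Model Registry. The feature group, model package group, container image, and S3 artifact names are hypothetical placeholders, not a prescribed setup.

```python
# A minimal sketch: share features via SageMaker Feature Store and catalog a
# trained model in the SageMaker Model Registry. All names are hypothetical.
import boto3

featurestore = boto3.client("sagemaker-featurestore-runtime")
sagemaker = boto3.client("sagemaker")

# Write one feature record; other models and teams can read the same features
# back for training or real-time inference, keeping model inputs consistent.
featurestore.put_record(
    FeatureGroupName="customer-features",
    Record=[
        {"FeatureName": "customer_id", "ValueAsString": "12345"},
        {"FeatureName": "lifetime_value", "ValueAsString": "842.50"},
        {"FeatureName": "event_time", "ValueAsString": "2021-12-01T00:00:00Z"},
    ],
)

# Catalog a trained model as a new version in a model package group so it can
# be reviewed and approved before deployment.
sagemaker.create_model_package(
    ModelPackageGroupName="churn-models",
    ModelPackageDescription="XGBoost churn model, December training run",
    ModelApprovalStatus="PendingManualApproval",
    InferenceSpecification={
        "Containers": [{
            "Image": "<account>.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest",
            "ModelDataUrl": "s3://my-bucket/models/churn/model.tar.gz",
        }],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
)
```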

At re:Invent 2021, AWS announced a suite of new SageMaker features that augment the capabilities available in this domain. Three that stood out to our team are SageMaker Serverless Inference, SageMaker Inference Recommender, and SageMaker Canvas; a short deployment sketch follows the list below.

  • SageMaker Serverless Inference is a preview feature which allows for deployment of predictive models without managing the underlying instance. It includes features such as built-in fault tolerance and automatic scaling. We see this as a viable option for proof-of-concept models, models with unpredictable usage patterns, or as an option for a quick model deployment where usage benchmarks can be established.
  • SageMaker Inference Recommender is a new service that allows users to test/optimize the appropriate endpoint for a predictive model, allowing data engineers to decrease the needed time to safely get these models into production environments. This includes making recommendations on the appropriate instance type and size, performing load tests for production requirements, and outputting endpoints meeting these criteria.
  • SageMaker Canvas is a no-code modeling platform that lets domain experts prepare training data and build predictive models without a background in programming. These models can then be sent to SageMaker Studio for review by a dedicated data scientist prior to deployment. This product will allow a larger portion of the workforce to uncover the insights available in their data, something we are passionate about at Atrium.
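
As a concrete illustration of the serverless option, the sketch below (Python with boto3) creates a model, an endpoint configuration with a ServerlessConfig in place of instance settings, and an endpoint. The model name, container image, artifact location, and IAM role are hypothetical placeholders.

```python
# A minimal sketch of deploying a model to a SageMaker Serverless Inference
# endpoint. All resource names, ARNs, and URIs are hypothetical.
import boto3

sagemaker = boto3.client("sagemaker")

sagemaker.create_model(
    ModelName="churn-model",
    PrimaryContainer={
        "Image": "<account>.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest",
        "ModelDataUrl": "s3://my-bucket/models/churn/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
)

# ServerlessConfig replaces the usual instance type and count: you size memory
# and concurrency, and SageMaker manages the underlying compute and scaling.
sagemaker.create_endpoint_config(
    EndpointConfigName="churn-serverless-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "churn-model",
        "ServerlessConfig": {"MemorySizeInMB": 2048, "MaxConcurrency": 5},
    }],
)

sagemaker.create_endpoint(
    EndpointName="churn-serverless-endpoint",
    EndpointConfigName="churn-serverless-config",
)
```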

SageMaker’s capabilities, both established and newly announced, can be used to efficiently and securely manage each step of the machine learning lifecycle and put those steps in the hands of individuals at all levels of the organization.

Data Lakes/Meshes as a Data Model

We have moved into an age where vast amounts of data, both structured and unstructured, are being used to drive business decisions and uncover opportunities for improvement. As organizations are embracing this cultural shift, traditional databases and data warehouses are being consolidated into domain-oriented data meshes to increase their data agility.

A data mesh (a term coined by Zhamak Dehghani, a ThoughtWorks consultant) is a data architecture where data is treated as a product and owned by cross-functional teams that can cut across the organization. This architecture allows the individuals who understand that data to derive the appropriate view and structure based on their domain. These data products, often contained within individual domain-specific data lakes, are consumed by other units across the organization in a self-service fashion under a centralized governance structure.

However, centralized governance and management of these solutions can be difficult in large enterprises, and have traditionally involved manually cataloging and registering available data sources, configuring security and access controls, and so on.

To address these challenges, AWS developed Lake Formation, a service that allows organizations to efficiently set up secure data lakes with a centralized data catalog describing the available data. During re:Invent 2021, we noticed that a large number of enterprises of various sizes presented on their journeys setting up a modern data mesh for their analytics data. And while the implementation and ingestion details varied from company to company, Lake Formation was present in every architecture that was presented.
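
The sketch below (Python with boto3) shows the flavor of that centralized governance: registering an S3 location with Lake Formation and granting a consuming team’s role access to a single table in the catalog. The bucket, database, table, and role names are hypothetical placeholders.

```python
# A minimal sketch of Lake Formation governance: register a storage location
# and grant table-level access to a consumer. All names are hypothetical.
import boto3

lakeformation = boto3.client("lakeformation")

# Register an S3 location as part of the data lake.
lakeformation.register_resource(
    ResourceArn="arn:aws:s3:::example-analytics-data-bucket",
    UseServiceLinkedRole=True,
)

# Grant a consuming team's role read access to one table in the central catalog.
lakeformation.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/MarketingAnalystRole"
    },
    Resource={"Table": {"DatabaseName": "sales_domain", "Name": "orders"}},
    Permissions=["SELECT"],
)
```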

What is Infrastructure as Code?

Infrastructure as Code (IaC) is one of the most visible outcomes of the rise of the DevOps mindset – the idea that you can configure and manage infrastructure (such as networks, hardware, virtual resources, and topologies) through declarative or scripted definitions (code)  rather than through manual configuration.  This approach allows you to apply the same kind of version control and repeatability to building out infrastructure as developers use for source code.  An IaC approach creates the opportunity to include your infrastructure as a part of your CI/CD processes by creating the same environment every time a deployment occurs.

Why is Infrastructure as Code important?

With cloud computing, the typical number of infrastructure components in a deployment has grown, and more applications are being released to production on a daily basis. Because of this, infrastructure needs to be spun up, scaled, and taken down frequently. Without an IaC practice in place, it becomes increasingly difficult to manage the scale of today’s infrastructure.

Additionally, since configuration is decoupled from the system, it can more readily be deployed on a similar system elsewhere. In this way, it reduces the challenges of migrating from a data center to a cloud or from one cloud to another.  IaC also supports agile development and CI/CD strategies by ensuring that sandbox, test, and production environments will be identical and remain consistent over time because they’re all configured with the same declarative code or scripts.

Benefits:

  • Increase deployment speed by integrating infrastructure into CI/CD pipelines
  • Improve infrastructure consistency, which can help to reduce errors
  • Eliminate configuration drift
  • Provide the ability to quickly adapt infrastructure
  • Reduce cost by consistently eliminating unused components across environments

How does Infrastructure as Code work?

There are two common approaches to defining IaC: imperative approaches, which specify the instructions to run but don’t define the outcome, and declarative approaches, which specify the desired configuration outcome without detailing how to get there. The latter is generally preferred.

A declarative approach may define the specific version and configuration of a server component as a requirement, but does not specify the process for installing and configuring it. This abstraction allows for greater flexibility in the middle, such as optimized techniques the infrastructure provider may employ. It also helps reduce the technical debt of maintaining imperative code, such as deployment scripts, that can accrue over time.
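
The toy Python sketch below illustrates the distinction, independent of any particular tool: the imperative version lists the steps to run, while the declarative version states only the desired end state and leaves a reconciliation routine to work out which actions are needed. All of the names and states are hypothetical.

```python
# A toy contrast between imperative and declarative definitions.

# Imperative: the author owns the ordering and the procedure.
imperative_steps = [
    "install nginx 1.21",
    "write /etc/nginx/nginx.conf",
    "enable and start the nginx service",
]

# Declarative: only the desired end state is stated.
desired_state = {
    "package": {"name": "nginx", "version": "1.21"},
    "config_file": "/etc/nginx/nginx.conf",
    "service": {"name": "nginx", "state": "running"},
}

def reconcile(current_state, desired):
    """Return only the actions needed to converge on the desired state."""
    actions = []
    for key, want in desired.items():
        if current_state.get(key) != want:
            actions.append(f"converge {key} -> {want}")
    return actions

# Only the service needs attention; everything else already matches.
current = dict(desired_state, service={"name": "nginx", "state": "stopped"})
print(reconcile(current, desired_state))
```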

How does AWS handle Infrastructure as Code?

AWS provides two primary means for defining Infrastructure as Code.  The first is via AWS CloudFormation templates as a declarative approach, and the other is via the AWS Cloud Development Kit (CDK) as a more imperative approach.

CloudFormation allows for the creation of AWS resources using text files written in JavaScript Object Notation (JSON) or YAML Ain’t Markup Language (YAML) format. The templates require a specific syntax and structure that depends on the types of resources being created and managed. You author your resources in JSON or YAML, check them into a version control system, and CloudFormation then builds the specified resources in a safe, repeatable manner.
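
The sketch below shows that workflow in Python with boto3: a small CloudFormation template (here, a single versioned S3 bucket) kept as text, deployed with create_stack, and waited on until the stack completes. The stack and bucket names are hypothetical placeholders.

```python
# A minimal sketch of deploying a CloudFormation template with boto3.
# The template and all names are hypothetical; in practice the template
# would live in version control rather than inline.
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Description: Example stack with a single versioned S3 bucket
Resources:
  DataBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-analytics-data-bucket
      VersioningConfiguration:
        Status: Enabled
"""

cloudformation = boto3.client("cloudformation")

cloudformation.create_stack(
    StackName="example-data-bucket-stack",
    TemplateBody=TEMPLATE,
)

# Block until the stack (and therefore the bucket) has been created.
waiter = cloudformation.get_waiter("stack_create_complete")
waiter.wait(StackName="example-data-bucket-stack")
```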

The AWS Cloud Development Kit (CDK) is an open source development framework for provisioning resources using familiar programming languages, such as Python, Java, or .NET. This allows developers to build their applications and define their infrastructure all in the same language or IDE, simplifying their experience. The CDK then uses CloudFormation in the background to orchestrate building out infrastructure on behalf of the developer.

During re:Invent 2021, Amazon announced the availability of CDK v2, which is a single package for all supported languages, making it easier to use the CDK and stay up to date with new versions or features.  AWS CDK v2 consolidates the AWS Construct Library into a single package, eliminating the need to download individual packages for each AWS service used.
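
For comparison, the sketch below defines the same kind of resource with CDK v2 in Python, using the consolidated aws_cdk package described above; running `cdk deploy` against it synthesizes a CloudFormation template behind the scenes. The stack and construct names are hypothetical.

```python
# A minimal CDK v2 sketch in Python: one stack containing one versioned
# S3 bucket. Names are hypothetical.
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3
from constructs import Construct


class DataBucketStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # One construct per resource; CDK fills in sensible defaults.
        s3.Bucket(self, "DataBucket", versioned=True)


app = cdk.App()
DataBucketStack(app, "ExampleDataBucketStack")
app.synth()
```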

Recommender Systems Using AWS Personalize

Given the amount of digital commerce that consumers and businesses rely upon, content personalization is quickly becoming the standard rather than the exception. However, the sophistication of these models and the level of data science specialization required to build these solutions can lead to low speed-to-production and reduced ROI for organizations. To combat this, AWS Personalize was launched in 2018, built on the best practices Amazon has developed over the past two decades.

This product allows for fast, easy-to-integrate recommender systems using your customer data in a no-code environment. It includes features such as user-specific recommendations, item-level similarity calculations, and business rule filtering. Since 2018, new features have been added based on customer feedback, including the use of Natural Language Processing (NLP) to uncover meaningful signals from unstructured text data and the use of user/item metadata to enhance similar-item recommendations. During re:Invent 2021, new enhancements to AWS Personalize were announced, including intelligent user segmentation and use-case-optimized recommenders for media and entertainment as well as retail.
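
Once datasets are imported and a campaign is deployed, requesting recommendations is a single API call. The sketch below (Python with boto3) asks a deployed campaign for the top items for one user; the campaign ARN and user ID are hypothetical placeholders for resources created in your own account.

```python
# A minimal sketch of requesting recommendations from a deployed
# AWS Personalize campaign. The ARN and user ID are hypothetical.
import boto3

personalize_runtime = boto3.client("personalize-runtime")

response = personalize_runtime.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/retail-demo",
    userId="user-42",
    numResults=10,
)

# Each entry is a recommended itemId, ordered by predicted relevance.
for item in response["itemList"]:
    print(item["itemId"])
```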

Product recommendations are a prominent use-case for our customers, and the capabilities of AWS Personalize allow for rapid development and deployment of these solutions across a wide range of industries.

The tools and services provided by AWS, showcased at re:Invent 2021, can drive organizational change and operational efficiencies at enterprises of all sizes.

Learn more about Atrium’s analytics and AI expertise and the services we offer.