We’re excited to announce the launch of Amazon DocumentDB (with MongoDB compatibility) integration with Amazon SageMaker Canvas, allowing Amazon DocumentDB customers to build and use generative AI and machine learning (ML) solutions without writing code. Amazon DocumentDB is a fully managed native JSON document database that makes it straightforward and cost-effective to operate critical document workloads at virtually any scale without managing infrastructure. Amazon SageMaker Canvas is a no-code ML workspace offering ready-to-use models, including foundation models, and the ability to prepare data and build and deploy custom models.
In this post, we discuss how to bring data stored in Amazon DocumentDB into SageMaker Canvas and use that data to build ML models for predictive analytics. Without creating and maintaining data pipelines, you will be able to power ML models with your unstructured data stored in Amazon DocumentDB.
Solution overview
Let’s assume the role of a business analyst for a food delivery company. Your mobile app stores information about restaurants in Amazon DocumentDB because of its scalability and flexible schema capabilities. You want to gather insights on this data and build an ML model to predict how new restaurants will be rated, but find it challenging to perform analytics on unstructured data. You encounter bottlenecks because you need to rely on data engineering and data science teams to accomplish these goals.
This new integration solves these problems by making it straightforward to bring Amazon DocumentDB data into SageMaker Canvas and immediately start preparing and analyzing data for ML. Additionally, SageMaker Canvas removes the dependency on ML expertise to build high-quality models and generate predictions.
We demonstrate how to use Amazon DocumentDB data to build ML models in SageMaker Canvas in the following steps:
Create an Amazon DocumentDB connector in SageMaker Canvas.
Analyze data using generative AI.
Prepare data for machine learning.
Build a model and generate predictions.
Prerequisites
To implement this solution, complete the following prerequisites:
Have AWS Cloud admin access with an AWS Identity and Access Management (IAM) user that has the permissions required to complete the integration.
Complete the environment setup using AWS CloudFormation through either of the following options:
Deploy a CloudFormation template into a new VPC – This option builds a new AWS environment consisting of the VPC, private subnets, security groups, IAM execution roles, AWS Cloud9, required VPC endpoints, and SageMaker domain. It then deploys Amazon DocumentDB into this new VPC. Download the template or quick launch the CloudFormation stack by choosing Launch Stack:
Deploy a CloudFormation template into an existing VPC – This option creates the required VPC endpoints, IAM execution roles, and SageMaker domain in an existing VPC with private subnets. Download the template or quick launch the CloudFormation stack by choosing Launch Stack:
Note that if you’re creating a new SageMaker domain, you must configure the domain to be in a private VPC without internet access to be able to add the connector to Amazon DocumentDB. To learn more, refer to Configure Amazon SageMaker Canvas in a VPC without internet access.
Follow the tutorial to load sample restaurant data into Amazon DocumentDB.
Add access to Amazon Bedrock and the Anthropic Claude model within it. For more information, see Add model access.
Create an Amazon DocumentDB connector in SageMaker Canvas
After you create your SageMaker domain, complete the following steps:
On the Amazon DocumentDB console, choose No-code machine learning in the navigation pane.
Under Choose a domain and profile, choose your SageMaker domain and user profile.
Choose Launch Canvas to launch SageMaker Canvas in a new tab.
When SageMaker Canvas finishes loading, you will land on the Data flows tab.
Choose Create to create a new data flow.
Enter a name for your data flow and choose Create.
Add a new Amazon DocumentDB connection by choosing Import data, then choose Tabular for Dataset type.
On the Import data page, for Data Source, choose DocumentDB and Add Connection.
Enter a connection name such as demo and choose your desired Amazon DocumentDB cluster.
Note that SageMaker Canvas will prepopulate the drop-down menu with clusters in the same VPC as your SageMaker domain.
Enter a user name, password, and database name.
Finally, select your read preference.
To protect the performance of primary instances, SageMaker Canvas defaults to Secondary, meaning that it will only read from secondary instances. When the read preference is Secondary preferred, SageMaker Canvas reads from available secondary instances, but will read from the primary instance if a secondary instance is not available. For more information on how to configure an Amazon DocumentDB connection, see Connect to a database stored in AWS.
Choose Add connection.
If the connection is successful, you will see the collections in your Amazon DocumentDB database shown as tables.
Drag your table of choice to the blank canvas. For this post, we add our restaurant data.
The first 100 rows are displayed as a preview.
To start analyzing and preparing your data, choose Import data.
Enter a dataset name and choose Import data.
Analyze data using generative AI
Next, we want to get some insights on our data and look for patterns. SageMaker Canvas provides a natural language interface to analyze and prepare data. When the Data tab loads, you can start chatting with your data with the following steps:
Choose Chat for data prep.
Gather insights about your data by asking questions like the samples shown in the following screenshots.
To learn more about how to use natural language to explore and prepare data, refer to Use natural language to explore and prepare data with a new capability of Amazon SageMaker Canvas.
Let’s get a deeper sense of our data quality by using the SageMaker Canvas Data Quality and Insights Report, which automatically evaluates data quality and detects abnormalities.
On the Analyses tab, choose Data Quality and Insights Report.
Choose rating as the target column and Regression as the problem type, then choose Create.
This will simulate model training and provide insights on how we can improve our data for machine learning. The full report is generated in a few minutes.
Our report shows that 2.47% of rows in our target column have missing values; we’ll address that in a later step. Additionally, the analysis shows that the address line 2, name, and type_of_food features have the most prediction power in our data. This suggests that basic restaurant information like location and cuisine may have an outsized influence on ratings.
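As a rough mental model, the missing-value percentage the report surfaces is the share of null values in the target column. A pandas equivalent on a toy sample (the column name comes from our dataset; the values are made up, and the real report computed 2.47% over the full dataset):

```python
import pandas as pd

# Toy sample with one missing rating out of four rows
df = pd.DataFrame({"rating": [5.0, None, 4.0, 3.0]})

# Fraction of null values in the target column, as a percentage
pct_missing = df["rating"].isna().mean() * 100
print(f"{pct_missing:.2f}% of rows are missing a rating")  # 25.00% here
```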
Prepare data for machine learning
SageMaker Canvas offers over 300 built-in transformations to prepare your imported data. For more information on the transformation features of SageMaker Canvas, refer to Prepare data with advanced transformations. Let’s add some transformations to get our data ready for training an ML model.
Navigate back to the Data flow page by choosing the name of your data flow at the top of the page.
Choose the plus sign next to Data types and choose Add transform.
Choose Add step.
Let’s rename the address line 2 column to cities.
Choose Manage columns.
Choose Rename column for Transform.
Choose address line 2 for Input column, enter cities for New name, and choose Add.
Additionally, let’s drop some unnecessary columns.
Add a new transform.
For Transform, choose Drop column.
For Columns to drop, choose URL and restaurant_id.
Choose Add.
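For reference, the rename and drop transforms above correspond to the following pandas operations. The column names are taken from our sample dataset; the row values are made up for illustration.

```python
import pandas as pd

# Toy frame mirroring the columns touched by the transforms above
df = pd.DataFrame({
    "address line 2": ["London", "Leeds"],
    "name": ["A", "B"],
    "URL": ["http://example.com/a", "http://example.com/b"],
    "restaurant_id": [1, 2],
})

# Rename address line 2 to cities, then drop the unneeded columns
df = df.rename(columns={"address line 2": "cities"})
df = df.drop(columns=["URL", "restaurant_id"])
print(df.columns.tolist())  # ['cities', 'name']
```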
Our rating feature column has some missing values, so let’s fill in those rows with the average value of this column.
Add a new transform.
For Transform, choose Impute.
For Column type, choose Numeric.
For Input columns, choose the rating column.
For Imputing strategy, choose Mean.
For Output column, enter rating_avg_filled.
Choose Add.
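The mean-imputation step above behaves like the following pandas sketch (toy values for illustration; the column names match the transform settings):

```python
import pandas as pd

# Toy restaurant ratings with one missing value
df = pd.DataFrame({
    "name": ["A", "B", "C", "D"],
    "rating": [5.0, None, 4.0, 3.0],
})

# Fill missing ratings with the column mean, writing to a new output column
df["rating_avg_filled"] = df["rating"].fillna(df["rating"].mean())
print(df["rating_avg_filled"].tolist())  # [5.0, 4.0, 4.0, 3.0]
```

The mean of the present values (5.0, 4.0, 3.0) is 4.0, so the missing row is filled with 4.0, and the original rating column can then be dropped.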
We can drop the rating column because we have a new column with filled values.
Because type_of_food is categorical in nature, we’ll want to numerically encode it. Let’s encode this feature using the one-hot encoding technique.
Add a new transform.
For Transform, choose One-hot encode.
For Input columns, choose type_of_food.
For Invalid handling strategy, choose Keep.
For Output style, choose Columns.
For Output column, enter encoded.
Choose Add.
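One-hot encoding replaces the single categorical column with one indicator column per cuisine. A minimal pandas equivalent of the transform above, using made-up cuisine values and the same encoded output prefix:

```python
import pandas as pd

# Toy cuisine values standing in for the real type_of_food column
df = pd.DataFrame({"type_of_food": ["Chinese", "Curry", "Chinese", "Pizza"]})

# One indicator column per category, prefixed like the Output column setting
encoded = pd.get_dummies(df["type_of_food"], prefix="encoded")
df = pd.concat([df.drop(columns=["type_of_food"]), encoded], axis=1)
print(df.columns.tolist())  # ['encoded_Chinese', 'encoded_Curry', 'encoded_Pizza']
```

Each row now has a 1 in exactly one of the indicator columns, which gives the model a numeric representation of cuisine.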
Build a model and generate predictions
Now that we have transformed our data, let’s train a numeric ML model to predict the ratings for restaurants.
Choose Create model.
For Dataset name, enter a name for the dataset export.
Choose Export and wait for the transformed data to be exported.
Choose the Create model link at the bottom left corner of the page.
You can also select the dataset from the Data Wrangler feature on the left of the page.
Enter a model name.
Choose Predictive analysis, then choose Create.
Choose rating_avg_filled as the target column.
SageMaker Canvas automatically selects a suitable model type.
Choose Preview model to ensure there are no data quality issues.
Choose Quick build to build the model.
The model creation will take approximately 2–15 minutes to complete.
You can view the model status after the model finishes training. Our model has an RMSE of 0.422, which means the model often predicts the rating of a restaurant within +/- 0.422 of the actual value, a solid approximation for the rating scale of 1–6.
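To see concretely what an RMSE of 0.422 measures, here is how RMSE is computed for a handful of hypothetical actual vs. predicted ratings (the numbers are made up for illustration):

```python
import math

# Made-up actual vs. predicted restaurant ratings
actual = [4.0, 5.0, 3.0, 6.0]
predicted = [4.5, 4.8, 3.2, 5.5]

# RMSE: square root of the mean squared prediction error
rmse = math.sqrt(
    sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
)
print(round(rmse, 3))  # 0.381
```

A lower RMSE means predictions sit closer, on average, to the true ratings on the same 1–6 scale as the target column.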
Finally, you can generate sample predictions by navigating to the Predict tab.
Clean up
To avoid incurring future charges, delete the resources you created while following this post. SageMaker Canvas bills you for the duration of the session, and we recommend logging out of SageMaker Canvas when you’re not using it. Refer to Logging out of Amazon SageMaker Canvas for more details.
Conclusion
In this post, we discussed how you can use SageMaker Canvas for generative AI and ML with data stored in Amazon DocumentDB. In our example, we showed how an analyst can quickly build a high-quality ML model using a sample restaurant dataset.
We showed the steps to implement the solution, from importing data from Amazon DocumentDB to building an ML model in SageMaker Canvas. The entire process was completed through a visual interface without writing a single line of code.
To start your low-code/no-code ML journey, refer to Amazon SageMaker Canvas.
About the authors
Adeleke Coker is a Global Solutions Architect with AWS. He works with customers globally to provide guidance and technical assistance in deploying production workloads at scale on AWS. In his spare time, he enjoys learning, reading, gaming and watching sport events.
Gururaj S Bayari is a Senior DocumentDB Specialist Solutions Architect at AWS. He enjoys helping customers adopt Amazon’s purpose-built databases. He helps customers design, evaluate, and optimize their internet scale and high performance workloads powered by NoSQL and/or Relational databases.
Tim Pusateri is a Senior Product Manager at AWS where he works on Amazon SageMaker Canvas. His goal is to help customers quickly derive value from AI/ML. Outside of work, he loves to be outdoors, play guitar, see live music, and spend time with family and friends.
Pratik Das is a Product Manager at AWS. He enjoys working with customers looking to build resilient workloads and strong data foundations in the cloud. He brings expertise working with enterprises on modernization, analytical and data transformation initiatives.
Varma Gottumukkala is a Senior Database Specialist Solutions Architect at AWS based out of Dallas Fort Worth. Varma works with customers on their database strategy and architects their workloads using AWS purpose-built databases. Before joining AWS, he worked extensively with relational databases, NoSQL databases, and multiple programming languages for 22 years.