This topic provides workflow examples for common tasks to help you get the most out of Cortex Code CLI. It covers
data discovery, synthetic data generation, building dashboards, and creating Cortex Agents.
Here are some examples of generating synthetic data for different use cases.
Fraud analysis for a fintech company:
> Generate realistic-looking synthetic data into <database_name>. Create a table of 10,000
financial transactions where ~0.5% of them are fraudulent. Include Amount, Location,
Merchant, and Time. Make the fraudulent ones look suspicious based on location or amount.
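For reference, the kind of table this prompt asks for can be sketched locally in plain Python. The column names and fraud heuristics below are illustrative assumptions, not necessarily what Cortex Code will produce:

```python
import random
from datetime import datetime, timedelta

random.seed(42)

MERCHANTS = ["Grocer", "Airline", "Electronics", "Coffee Shop"]
LOCATIONS = ["New York", "Chicago", "Austin", "Seattle"]

def make_transaction():
    """One synthetic transaction; ~0.5% are flagged fraudulent with a
    suspiciously large amount and an unusual location."""
    is_fraud = random.random() < 0.005
    return {
        "AMOUNT": round(random.uniform(5000, 20000) if is_fraud else random.uniform(5, 500), 2),
        "LOCATION": "Overseas" if is_fraud else random.choice(LOCATIONS),
        "MERCHANT": random.choice(MERCHANTS),
        "TIME": datetime(2024, 1, 1) + timedelta(seconds=random.randrange(90 * 24 * 3600)),
        "IS_FRAUD": is_fraud,
    }

transactions = [make_transaction() for _ in range(10_000)]
fraud_rate = sum(t["IS_FRAUD"] for t in transactions) / len(transactions)
print(f"{len(transactions)} rows, fraud rate {fraud_rate:.3%}")
```

In practice Cortex Code generates and loads the data for you; the sketch just shows the shape of the result you should expect to see in the table.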
Pharma trial data:
> Make a dummy dataset for a clinical trial of a new blood pressure medication. List 100
patients, their age, their dosage group (Placebo vs. 10mg), and their blood pressure
readings over 4 weeks.
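A local sketch of what such a trial dataset might look like; the dosage effect, value ranges, and 50/50 group split are invented for illustration:

```python
import random

random.seed(7)

def make_patient(pid: int) -> dict:
    """One trial patient with four weekly systolic readings (mmHg).
    Assumption: the 10mg arm drifts down ~2 mmHg per week."""
    group = "Placebo" if pid % 2 == 0 else "10mg"
    baseline = random.uniform(130, 160)
    drop = 2.0 if group == "10mg" else 0.0
    readings = [round(baseline - drop * week + random.uniform(-3, 3), 1) for week in range(4)]
    return {"PATIENT_ID": pid, "AGE": random.randint(35, 80), "GROUP": group, "BP_WEEKLY": readings}

patients = [make_patient(i) for i in range(1, 101)]
print(len(patients), patients[0]["GROUP"], patients[0]["BP_WEEKLY"])
```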
Customer churn data:
> Create a customer churn dataset for a telecom company showing customer usage for 100,000
customers. Include basic demographic data such as fake names, phone numbers, US city and
state. Also include data usage (GB), call minutes, contract length, and whether they
cancelled their service (churn). Ensure there's a customer_id column that's unique. Create
the data locally and then upload it to Snowflake.
> Calculate the churn rate grouped by state and contract length. Order the results by the
highest churn rate first so I can see the riskiest regions and contract types.
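Under the hood this is a group-by aggregation. A minimal stdlib sketch over toy rows (column names are assumed to match the dataset prompt above):

```python
from collections import defaultdict

# Toy rows standing in for the telecom churn table (names and values assumed).
rows = [
    {"STATE": "TX", "CONTRACT_LENGTH": 12, "CHURNED": True},
    {"STATE": "TX", "CONTRACT_LENGTH": 12, "CHURNED": False},
    {"STATE": "TX", "CONTRACT_LENGTH": 24, "CHURNED": False},
    {"STATE": "CA", "CONTRACT_LENGTH": 12, "CHURNED": True},
    {"STATE": "CA", "CONTRACT_LENGTH": 12, "CHURNED": True},
]

totals = defaultdict(lambda: [0, 0])  # (state, contract) -> [churned, total]
for r in rows:
    key = (r["STATE"], r["CONTRACT_LENGTH"])
    totals[key][0] += r["CHURNED"]
    totals[key][1] += 1

# Highest churn rate first, mirroring the ordering asked for in the prompt.
ranked = sorted(
    ((state, length, churned / total)
     for (state, length), (churned, total) in totals.items()),
    key=lambda t: t[2],
    reverse=True,
)
for state, length, rate in ranked:
    print(f"{state} {length}mo: {rate:.0%}")
```

Cortex Code will typically express this as a SQL `GROUP BY` against the Snowflake table instead; the sketch only shows the computation the prompt describes.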
> I want to identify the heaviest data users who are also churning.
Create and deploy Streamlit apps with charts, filters, and interactivity.
Tip
Open an example dashboard you like (or find one online) and copy it to your clipboard.
You can paste images directly into Cortex Code (Ctrl+V) as design references.
> Build an interactive Streamlit dashboard on this data with state filters, and use the
conversation so far for examples of the kinds of charts to show. Use the attached image
as a template for visuals and branding.
Once you’ve verified that the dashboard is working and looks good, upload it to Snowflake:
> Ensure that the Streamlit app will work with Snowflake and upload it to Snowflake.
Give me a link to access the dashboard when it's done.
Congratulations! You should now have a working Streamlit dashboard that displays the dataset you created.
This section walks through creating a Cortex Agent to answer questions about your data in Snowflake Intelligence.
We’ll augment the existing synthetic data with customer call transcripts.
First, generate synthetic data containing customer service calls:
> Generate a new table called customer_call_logs. Generate 50 realistic customer service
transcripts (2-3 sentences each) as PDF files. Some should be angry complaints about
coverage, others should be questions about billing. Then use the AI_PARSE_DOCUMENT
function to extract the text and layout information from the PDFs into the TRANSCRIPT_TEXT
column. Split the text into chunks for better search quality.
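The chunking step is worth understanding: fixed-size windows with a small overlap keep sentences that straddle a chunk boundary retrievable from either side. A hypothetical sketch of the idea (the sizes are assumptions, not Cortex Search defaults):

```python
def chunk_text(text: str, max_chars: int = 300, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows. The overlap means a
    sentence cut at a boundary still appears whole in the next chunk."""
    if max_chars <= overlap:
        raise ValueError("max_chars must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks

transcript = "I have called three times about dropped calls. " * 20
chunks = chunk_text(transcript)
print(f"{len(chunks)} chunks, first chunk {len(chunks[0])} chars")
```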
Then create a Cortex Search service that indexes the transcripts:
> Create a Cortex Search service named CALL_LOGS_SEARCH that indexes these transcripts.
It should index the TRANSCRIPT_TEXT column and filter by CUSTOMER_ID.
Build a Cortex Agent that uses both the Analyst and Search services:
> Build a Cortex Agent that has access to two tools:
- cortex_analyst: For querying the TELECOM_CUSTOMERS SQL table.
- cortex_search: For searching the CALL_LOGS_SEARCH service.
Write a system prompt for this agent:
- Persona: You are a Senior Retention Specialist.
- Routing Logic: If the user asks for 'metrics', 'counts', or 'averages', use the
Analyst tool. If the user asks for 'sentiment', 'reasons', or 'summaries of calls', use
the Search tool.
- Output Format: Always verify the customer ID before answering. If the risk score is
high, end the response with a recommended retention offer (e.g., 'Offer 10% discount').
- Constraint: Never reveal the raw CHURN_RISK_SCORE to the user; interpret it as 'Low',
'Medium', or 'High'.
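The routing and constraint rules in that system prompt amount to keyword dispatch plus score bucketing. A toy Python sketch of the behavior being described; note the real routing is done by the agent's LLM, not string matching, and the thresholds and fallback tool here are assumptions:

```python
ANALYST_TERMS = {"metrics", "counts", "averages"}
SEARCH_TERMS = {"sentiment", "reasons", "summaries"}

def route(question: str) -> str:
    """Pick a tool the way the system prompt describes: quantitative
    words go to cortex_analyst, qualitative words to cortex_search."""
    words = set(question.lower().split())
    if words & ANALYST_TERMS:
        return "cortex_analyst"
    if words & SEARCH_TERMS:
        return "cortex_search"
    return "cortex_analyst"  # assumed default for anything else

def bucket_risk(score: float) -> str:
    """Never expose the raw CHURN_RISK_SCORE; report a label instead
    (the 0.4 / 0.7 thresholds are assumptions for illustration)."""
    return "High" if score >= 0.7 else "Medium" if score >= 0.4 else "Low"

print(route("Show me churn counts by state"))
print(route("What sentiment do angry callers express"))
print(bucket_risk(0.85))
```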