Posts

Showing posts with the label machine learning

Train your data model using Azure AI model and predict results using C#

Hi friends, I would like to talk today about the immense capabilities of Azure AI models, how they have evolved over the years, and how you can leverage them with C#, without writing much code (and with very little understanding of AI). Before we begin, let us consider some basics: What is meant by REGRESSION in AI? Regression is a machine learning technique that uses input features to predict continuous numerical outputs. It's a type of supervised learning, meaning it's trained on a labeled dataset with known output values. Let me give you an example: suppose I have data about your company, showing how it has been performing with respect to the profit it has made over the years. Here is the data. When plotted, it gives a scatter like this. From there we can trace out a line that covers most of the points, and based on that we can easily predict the profit value another 10 years down the line. Such a plot is typically described by the equation: y=...
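
As a quick illustration of the regression idea above, here is a minimal sketch in plain Python with scikit-learn (rather than the Azure AI / C# route the post itself takes). The year and profit figures are made-up illustrative values, not data from the post.

```python
# Minimal sketch: fit a straight line to year/profit data and extrapolate.
# The numbers below are purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

years = np.array([[2015], [2016], [2017], [2018], [2019], [2020]])  # input feature (X)
profit = np.array([1.2, 1.5, 1.9, 2.4, 2.8, 3.1])                   # continuous output (y), e.g. in million USD

model = LinearRegression()
model.fit(years, profit)                        # trace the best-fit line y = m*x + c

print(model.predict(np.array([[2030]])))        # predicted profit 10 years down the line
print(model.coef_, model.intercept_)            # slope (m) and intercept (c) of the fitted line
```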

Using Python to plot D365FO data and visualize the trend using dataframes

I am sure you must have read my earlier post: https://subsd365.blogspot.com/2024/03/using-python-to-fetch-data-from-d365.html, which outlines a way of reading data from D365FO using Python. This post will help you not just fetch the data from F&O, but also visualise it plotted against x and y axes, and then understand the trend of your business (e.g. you can see which item is in high demand, which customer has made the most transactions across a given time frame, even when a customer could potentially transact again: examples galore). Before you proceed, please read the above post first and make sure: a. You have installed Jupyter Notebook on your machine (it's free and absolutely easy to install and operate). b. You have a valid Microsoft Entra ID configured and given API permission to D365F&O. c. You have listed the client ID in your D365F&O side Entra ID registration. Before we begin, let us quickly check: ...
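
For a flavour of the dataframe-and-plot step, here is a minimal sketch assuming you already have the OData JSON response from F&O (as covered in the earlier post). The column names CustomerAccount and TotalAmount, and the sample rows, are assumptions for illustration only.

```python
# Minimal sketch: turn an OData payload into a dataframe and plot a trend.
import pandas as pd
import matplotlib.pyplot as plt

# 'response_json' stands in for the parsed JSON of your OData call,
# e.g. response_json = requests.get(url, headers=headers).json()
response_json = {
    "value": [
        {"CustomerAccount": "US-001", "TotalAmount": 1200.0},
        {"CustomerAccount": "US-002", "TotalAmount": 800.0},
        {"CustomerAccount": "US-001", "TotalAmount": 450.0},
    ]
}

df = pd.DataFrame(response_json["value"])      # flatten the OData payload into a dataframe

# Aggregate per customer to see who transacts the most
per_customer = df.groupby("CustomerAccount")["TotalAmount"].sum().sort_values(ascending=False)

per_customer.plot(kind="bar", title="Sales amount per customer")
plt.xlabel("Customer")
plt.ylabel("Total amount")
plt.show()
```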

Using Python to fetch data from D365 Finance & Operations

This short article can help you fetch data from D365FinOps, which a. You can subsequently analyse b. You can perform further predictive modelling on c. You can use to understand and forecast the direction in which the business is going, and so on. And it's quite straightforward. Step 1. Define your Azure app registration (AKA Microsoft Entra ID). Step 2. Ensure that the Azure app is registered on the D365FO end by enabling it through System Admin \\Setup\\ Microsoft Entra ID Applications. Step 3. We are going to use Jupyter Notebook here for our code. It can be launched easily if you have installed Anaconda as your IDE, by simply going to Windows \\ Jupyter. This will prompt you with an intermittent DOS prompt like this: And will eventually open the Jupyter browser, which looks a lot like this: Step 4: We are going to use the following modules for our code: import requests import json   The former handles the requests and responses from API calls ...
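
As a rough sketch of the token-plus-OData flow described above, using only the requests and json modules: every placeholder below (tenant ID, client ID, secret, environment URL, entity name) is an assumption you would replace with your own values.

```python
# Minimal sketch: acquire a token from Microsoft Entra ID, then call the F&O OData endpoint.
import requests
import json

tenant_id = "<your-tenant-id>"
client_id = "<your-app-registration-client-id>"
client_secret = "<your-client-secret>"
resource = "https://yourenv.operations.dynamics.com"    # your D365FO environment URL

# Step A: get an access token via the client credentials flow
token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
token_resp = requests.post(token_url, data={
    "grant_type": "client_credentials",
    "client_id": client_id,
    "client_secret": client_secret,
    "scope": f"{resource}/.default",
})
access_token = token_resp.json()["access_token"]

# Step B: call an F&O OData data entity with the token
headers = {"Authorization": f"Bearer {access_token}"}
odata_url = f"{resource}/data/CustomersV3?$top=5"        # entity name is an example
data = requests.get(odata_url, headers=headers).json()

print(json.dumps(data, indent=2))                        # pretty-print the payload
```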

An example of predictive modelling using Decision trees, Python - step by step

The aim is to see how accurately your system can predict the future. All around the globe, we see organizations and enterprises preparing applications that can read and understand the psyche of buyers, and thereby predict what they might come back to buy in another 3-4 months. This is exactly how you see 'Customers who bought this also bought that' on Amazon, or 'People similar to your profile' on LinkedIn. The idea is to feed your system with more and more data: the more data you pour in to train your machine, the easier it gets for the machine to understand the pattern, and the more accurate the result. A typical machine learning process consists of the following steps: Import the data: this is where we feed the data into the system. Clean the data: the most essential part of the process, which demands cleaning the data beforehand to get rid of duplicate data, repetitive data, data with null values, etc. Each project has a different modus operandi ...
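
To make the import / clean / train / predict steps concrete, here is a minimal decision-tree sketch using scikit-learn's bundled iris dataset purely for illustration; the post itself walks through its own dataset step by step.

```python
# Minimal sketch: import -> clean -> split -> train -> predict with a decision tree.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Import the data
df = load_iris(as_frame=True).frame

# Clean the data (drop duplicates and rows with null values; trivial for this tidy dataset)
df = df.drop_duplicates().dropna()

X = df.drop(columns="target")    # input features
y = df["target"]                 # labels to predict

# Split into training and test sets, then train and predict
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = DecisionTreeClassifier()
model.fit(X_train, y_train)
predictions = model.predict(X_test)

print("Accuracy:", accuracy_score(y_test, predictions))  # how accurately the model predicts
```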