
Dan for Leading EDJE


An Enterprise AI Platform

Today, Artificial Intelligence (AI) is largely focused on performing a single task very well rather than providing a comprehensive solution that covers many areas requiring intelligence.

AI has been around for decades, so why the big push for AI today? Three main factors have created an explosion in the AI industry.

Big Data. There is more data than ever, which allows machine learning to improve and provide more relevant insights.
Reduced processing costs. Historically, the cost of setting up infrastructure and building a specialized team was extremely high; AI required huge budgets and investment in custom, made-to-order algorithms. The rise of ubiquitous computing, low-cost cloud services, inexpensive storage, and new algorithms has changed all that. Cloud computing and advances in Graphics Processing Units (GPUs) provide the necessary computational power, while AI algorithms and architectures have progressed rapidly, often enabled by open source software. Today there are many open source options and cloud solutions from Google, Amazon, IBM, and others to address infrastructure costs.
Breakthroughs in deep learning. The third reason AI is surging today has to do with deep learning, a subset of machine learning whose structures are loosely inspired by the neural connections in the human brain. Most of the big breakthroughs happened after 2010, but deep learning (neural networks) has already demonstrated the ability to solve highly complex problems that are well beyond the capabilities of a human programmer using if-then statements and decision trees.
So how can an organization adopt AI? Here are some best practices for bringing Artificial Intelligence into your organization.

Setting Executive Expectations About AI Adoption: AI is Not Like Regular Software
Business leaders should understand that they have to fundamentally change the way they think about these projects compared to conventional software automation. Implementing a software automation project that doesn't include any AI is usually far simpler and takes significantly less time.

AI initiatives are time- and resource-intensive. They generally require enormous amounts of very specific kinds of data that the business may or may not have.

They require time from both data scientists and the subject-matter experts who inform their work; a business has to be comfortable pulling subject-matter experts away from their routine revenue-generating work so they can collaborate with data scientists to build an AI model. Despite this investment, the ROI of any given AI project may be negligible in the near term, if there is one at all.

Sector-Specific AI Understanding
Businesses also need to understand conceptually what kinds of problems AI can solve. There are many approaches to artificial intelligence and machine learning, such as natural language processing, computer vision, and anomaly detection. Each has its own specific use cases in business.

Once business leaders understand what's possible with AI, they can identify business problems at their organization that AI might solve. There are plenty of AI applications on the market, so it's vital to choose the ones that solve real business problems with suitable technology and deliver value in measurable ways. Finally, businesses need adequate and relevant data for training machine learning algorithms to produce correct outputs.

Clarity on Goals
Setting a long-term AI objective is critical for success. At the same time, business leaders need to understand that any "rethinking" of business workflows through AI is a huge undertaking with varying rates of progress.

Organizations should concentrate on building AI capabilities first and then use them as a guide for defining long-term objectives. If the goal is upskilling the team's understanding of AI, start with small projects and pick an area where a traditional software solution already exists. Choose something for which you already have a reasonable existing model, so that you can at least compare it against the results of the AI model and know whether you are heading in the right direction.

While accomplishing these long-term objectives is not a short-term process, the best way to move toward them is to begin with small AI projects that are aligned with the kind of long-term AI capabilities the business wants to build.

Ways to Organize AI-Compatible Teams
Once organizations understand what AI can do and have aligned those capabilities with their business objectives, the next step is to assemble data scientists and subject-matter experts (SMEs) into multidisciplinary teams. SMEs are employees with a deep understanding of the business processes in a specific function or division.

As a rule, assembling such a team is more expensive than a traditional software project, and the budget usually has to be approved by the COO or someone else in the C-suite. Organizing such a team of data scientists may involve the following steps:

Ensuring that the data scientists working on the solution are clearly aware of the business problem AI is being applied to. This gives them context on how much and what kind of data they need, as well as which other team members' skills may be required for the project.

Subject-matter experts identify the business problems that need to be solved. From there, data scientists are better positioned to determine whether AI can address a specific business problem.

Artificial intelligence projects are not a one-time investment. As organizations produce new data, the algorithms need to be retrained to incorporate the additional data while still maintaining accurate results. Maintaining and updating AI systems is necessary, and business leaders need to assemble teams that can carry out this work even after the system is largely built and deployed. Again, this process does not involve just data scientists; as with the initial development, maintaining and calibrating these systems to improve their accuracy also requires input from subject-matter experts and other team members.

Data and Data Infrastructure Considerations
We have covered most of the essential human components needed to adopt AI effectively, but none of these steps pay off unless they are built around a data-centered strategy. Data is what makes AI projects run, and that data needs to be cleaned, parsed, and tested before it can be used.

Rethinking how a business collects, stores, and manages data is a decision that should be made after it has gained a certain degree of data competency.

Once a pilot has demonstrated measurable value as a proof of concept, organizations can consider the stage where the entire data infrastructure is overhauled. Here are a few pointers on what business leaders can expect when it comes to data and data infrastructure management in AI projects:

Organizations will find that accessing data is usually harder than anticipated. Data may be stored in several different formats, or in different geographic regions with different data-transfer regulations.

Even the data that is available is usually not in a format that makes it easy to use. The data will often require heavy restructuring and reformatting, and in some cases cleansing (see the short cleaning sketch after this list).

The storage hardware for this data may also need upgrading. In addition, organizations may need to reconsider how they currently collect data and what new infrastructure may be required to implement AI economically.
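
As a minimal sketch of what that restructuring and cleansing step can look like in practice (the file name and column names here are hypothetical, chosen only for illustration), a few lines of pandas go a long way:

```python
import pandas as pd

# Hypothetical export; the file name and columns are illustrative only.
df = pd.read_csv("customers.csv")

# Normalize column names and obvious formatting issues.
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")

# Drop exact duplicates and rows missing fields we cannot work without.
df = df.drop_duplicates()
df = df.dropna(subset=["customer_id", "signup_date"])

# Persist the cleaned version for downstream training jobs.
df.to_csv("customers_clean.csv", index=False)
```

Even a small script like this surfaces how much of the available data is actually usable, which feeds directly into the infrastructure decisions above.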

Picking an Initial AI Project
Start Small, But With A Long-Term View of AI Skills
AI projects also involve many individual steps that may each take days or weeks to complete. In practice, there are many benefits to beginning small:

Small projects help businesses focus on building skills instead of looking for outright returns right away. AI projects are technically difficult and need large amounts of initial capital to deploy. They can take two to six months to build, and even then there may not be a successful result. Beginning with small pilots lets businesses see which AI capabilities are working and which are not delivering value.

Small projects may not require a total data infrastructure overhaul to successfully test and deploy. For example, deploying a chatbot may not require a business to overhaul its entire data infrastructure, yet it still provides a point of entry into AI. Businesses can keep small AI projects contained in terms of internal data flows, thereby not disrupting existing processes.

Small projects help build confidence in data management capabilities.
Gaining confidence in working with data supports the development of future AI projects, and treating data skills as a critical capability can allow businesses to stay ahead of the competition.

Basic Tips for Artificial Intelligence
Understand your data and business
The above statement may sound like common sense, but it’s worth mentioning anyway, as skipping those steps may have critical consequences for the project.
Exploratory data analysis helps assess data quality and set reasonable expectations for the project's goals. Moreover, close cooperation with Subject Matter Experts provides domain insights, which are key to getting a complete understanding of the problem.
This should lead to metrics that help track project progress not only from a machine learning perspective but also in terms of business factors.
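
As a minimal sketch of that first exploratory pass (assuming a hypothetical tabular dataset in train.csv with a column named target), a handful of pandas calls already answers the most important questions:

```python
import pandas as pd

# Hypothetical dataset; the file name and target column are illustrative only.
df = pd.read_csv("train.csv")

print(df.shape)                 # how much data we actually have
print(df.dtypes)                # which columns are numeric vs. categorical
print(df.isna().sum())          # where data is missing
print(df.describe())            # ranges and obvious outliers

# If this is a classification problem, check class balance early:
# a heavily skewed target changes which metrics are meaningful.
print(df["target"].value_counts(normalize=True))
```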

Stand on the shoulders of giants
It's highly likely that somebody has already faced a problem just like yours and published a solution that can help you.
A literature review, blog posts, and an evaluation of available open-source code can help you set an initial direction and shortlist possible approaches that may support building the product.

Don't believe everything stated in the papers
On the other hand, many papers are written to prove a specific model's superiority over alternative approaches and don't address the limitations and downsides of the method. Therefore, it's good practice to approach each article with a dose of skepticism and common sense.

Start with a simple approach
Running a simple approach may give you more insight into the problem than a more complicated one, as simple methods and their results are easier to interpret. Moreover, implementing, training, and evaluating a simple model is much less time-consuming than a complicated one.

This is a trade-off between model interpretability and flexibility: a more flexible model can handle harder tasks, but its results are harder to interpret. On that spectrum, deep learning sits at the far, least-interpretable end.
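
As a minimal sketch of such a simple starting point (using scikit-learn's synthetic data generator as a stand-in for a real dataset), a logistic regression is quick to train and its coefficients are directly inspectable:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for your real feature matrix and labels.
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
# The coefficients are directly inspectable, which is part of the appeal.
print(model.coef_)
```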

Define your baseline
How do you know that your state-of-the-art, billion-parameter model does better than a naive solution? Since sophisticated methods do not always outperform more straightforward approaches, it's good practice to have a simple baseline that helps you track the gain offered by complex strategies. Sometimes the benefit is minimal, and a simple method may be preferable for a given task for reasons like inference speed or deployment cost.
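
As a minimal sketch (again using synthetic data as a stand-in for a real dataset), scikit-learn's DummyClassifier makes a convenient naive baseline to compare any real model against:

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Naive baseline: always predict the most frequent class.
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)

# Candidate "real" model.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

print("baseline accuracy:", accuracy_score(y_test, baseline.predict(X_test)))
print("model accuracy:   ", accuracy_score(y_test, model.predict(X_test)))
# The gap between these two numbers is the gain the complex model buys you.
```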

Plan and track your experiments
Numerous variables may influence the performance of AI algorithms. This is especially true for deep learning models, where one can experiment with model architectures, cost functions, and hyperparameters. Tracking the experiments therefore becomes challenging, especially if many people work together.
The solution is simply a lab notebook. Depending on the team size and your needs, it might be as simple as a shared spreadsheet or as sophisticated as MLflow.
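
As a minimal sketch of the MLflow route (the experiment name, parameters, and metric value here are illustrative only), logging a run takes just a few calls:

```python
import mlflow

# Illustrative experiment name; MLflow creates it if it does not exist.
mlflow.set_experiment("churn-model-prototypes")

with mlflow.start_run(run_name="logreg-baseline"):
    # Record what was tried...
    mlflow.log_param("model", "logistic_regression")
    mlflow.log_param("max_iter", 1000)
    # ...and what came out of it (placeholder value shown here).
    mlflow.log_metric("val_accuracy", 0.87)
```

A shared spreadsheet with the same columns (run name, parameters, metric) serves the same purpose for a small team; the important thing is that every experiment gets written down.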

Don't spend too much time on fine-tuning
The results presented in papers are often the effect of pushing the described methods to their limits. The extra fractions of a percentage point of accuracy may be the result of many time-consuming experiments. Moreover, papers are not step-by-step implementation guides; they focus on describing the essential concepts of the presented method, and the authors often omit nuances that matter from an implementation perspective. Implementing a paper from scratch can therefore be a very challenging task, especially if you are trying to match the reported accuracy.
An AI project is usually time-constrained and requires a sensible approach to time management. Hence, if the project's goal is something other than replicating a publication precisely, "close enough" results may be sufficient reason to stop polishing the implementation. This is especially important if several approaches need to be implemented and evaluated.

Make your experiments reproducible
It doesn't bring much value to the project if you managed to attain 99% accuracy but are unable to reproduce that result. Therefore, you must ensure that your experiments can be repeated.
First of all, use version control, not only for your code but also for your data. There are several tools for code versioning, and data versioning is also gaining more and more attention, which has resulted in solutions suited to data science projects.
Machine learning frameworks are non-deterministic and depend on pseudo-random number generators, so you may obtain different results on different runs. To make things fully reproducible, store the seed you used to initialize your weights.
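
As a minimal sketch of seeding the usual sources of randomness in a Python stack (the SEED value and the use of NumPy/PyTorch are assumptions about your toolchain), this is the kind of snippet worth keeping next to the logged experiment parameters:

```python
import random

import numpy as np
import torch  # assumed framework; the same idea applies to others

SEED = 42  # record this value alongside the experiment's other parameters

random.seed(SEED)        # Python's built-in RNG
np.random.seed(SEED)     # NumPy, used by most data pipelines
torch.manual_seed(SEED)  # PyTorch weight initialization, dropout, shuffling
```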

Maintain code quality
There is a common term, "research code," which is used as an excuse for poor-quality code that is barely readable. The authors usually say the main focus was to create and evaluate a brand-new method rather than worry about code quality. That may be a fair excuse as long as nobody else has to reuse the implementation and there is no need for changes or deployment to production. Unfortunately, all of those things are inherently part of a commercial project. Therefore, as soon as you make your code available to others, refactor it and make it human-friendly.
Moreover, sometimes not only is the code quality poor, but the project structure also makes it hard to understand. In this case, too, you can benefit from existing tools that help maintain a clear code organization.


