Python is one of the most popular programming languages, and programming AI in it has many advantages. In this course, you'll learn about the differences between Python and other programming languages used for AI, Python's role in the industry, and cases where using Python can be beneficial. You'll also examine multiple Python tools, libraries, and development environments and recognize the direction in which this language is developing.
In this course, you'll learn about the development of AI with Python, starting with simple projects and ending with comprehensive systems. You'll examine various Python environments, along with ways to set them up and begin coding, leaving you with everything you need to start building your own AI solutions in Python.
Search algorithms provide solutions for many problems, but those solutions aren't always optimal. Discover how constraint satisfaction algorithms can do better than search algorithms in some cases, and how to use them.
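To give a concrete flavor of constraint satisfaction, here is a minimal sketch (not taken from the course materials) of backtracking search on a map-coloring problem; the region names, colors, and adjacency data are purely illustrative:

```python
# Backtracking constraint satisfaction: assign each variable a value from
# its domain so that no two neighboring variables share a value.

def solve_csp(variables, domains, neighbors, assignment=None):
    """Return a complete consistent assignment, or None if none exists."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Consistent only if no already-assigned neighbor uses this value.
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = solve_csp(variables, domains, neighbors, assignment)
            if result is not None:
                return result
            del assignment[var]  # backtrack
    return None

# Three-color a small map: adjacent regions must receive different colors.
variables = ["WA", "NT", "SA", "Q"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q"], "Q": ["NT", "SA"]}
solution = solve_csp(variables, domains, neighbors)
```

Unlike plain search over states, the constraints prune candidate values before recursing, which is what makes this approach effective on problems like scheduling and map coloring.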
This 13-video course explores how artificial intelligence (AI) can be leveraged, how to plan an AI implementation from setup to architecture, and the issues surrounding incorporating it into an enterprise. Learners will explore the three legs of AI, beginning with how it applies intelligence-like behavior to machines. You will then examine how machine learning adds to this intelligence-like behavior, and how deep learning represents the next generation. This course discusses strategies for implementing AI, organizational challenges surrounding its adoption, and the need to train both personnel and machines. Next, learn the role of data and algorithms in AI implementation. Learners continue by examining several ways in which an organization can plan and develop AI capability; the elements organizations need to assess AI needs and tools; management challenges; and the impact on personnel. You will learn about pitfalls in using AI and what to avoid. Finally, you will learn about data issues, data quality, training concepts, overfitting, and bias.
In this 12-video course, you will examine the different uses of data science tools and the overall platform, as well as the benefits and challenges of machine learning deployment. The first tutorial explores what automation is and how it is implemented. This is followed by a look at the tasks and processes best suited for automation. This leads learners into exploring automation design, including what Display Status is, and also the Human-Computer Collaboration automation design principle. Next, you will examine the Human Intervention automation design principle; automated testing in software design and development; and also the role of task runners in software design and development. Task runners are used to automate repeatable tasks in the build process. Delve into DevOps and automated deployment in software design, development, and deployment. Finally, you will examine process automation using robotics, and in the last tutorial in the course, recognize how modern robotics and AI designs are applied. The concluding exercise involves recognizing automation and robotics design application.
A cross-platform library, OpenCV facilitates image processing and analysis. In this course, you'll discover fundamental concepts related to computer vision and the basic operations which can be performed on images using OpenCV. You'll begin by outlining how to read images from your file system into your Python source in the form of arrays and then save an image array into a local file. Next, you'll explore color images represented as a combination of blue, green, and red channels, how to convert color images to grayscale, and how grayscale images are defined. Finally, you'll perform basic operations on images by investigating how to combine two images using an add operation and make one of the added images more prominent than the other using a weighted addition. Conversely, you'll also perform a subtract operation using two images.
In this course, participants will examine chatbot use cases, the technology stack, and popular development and deployment tools with Amazon's Alexa on Amazon Web Services (AWS) and Google's Dialogflow. First, you will learn about chatbots, the categories in which they are used, and the different classifications of chatbots. You will explore the different technologies orchestrated to create chatbots and examine the conversational flow of the typical chatbot/human interface. Then examine the Dialogflow building blocks and the elemental building blocks for a typical chatbot built with the AWS Alexa Skills Kit. Next, you will set up the AWS developer account required for Alexa Skills development and use the account and an AWS Lambda service to develop Alexa Skills. Then explore the components of the Alexa Development Console. Learn how to configure an AWS Lambda function. After setting up a developer account on Google's Dialogflow, you will look into the Dialogflow developer console and its components. In a closing exercise, you will practice what you learned about chatbots and their architecture.
In this course, participants explore the development of chatbots with one of the main chatbot development frameworks, Google's Dialogflow Developer Console. Start by creating an agent for a chatbot and exploring default intents in Dialogflow. Intents map what a user says to what the bot should do. You will then create custom intents in Dialogflow. Participants then examine the important differences between developer and system entities in Dialogflow. Next, you will generate developer entities to extract information from user conversations in Dialogflow. Learn how to generate training phrases, which are expressions a user might say when they want to invoke an intent. You will then work with the actions and parameters associated with each intent. Learn how to write static responses, which a bot can use to reply to a user in Dialogflow. Enable the Small Talk feature for a chatbot and test its functionality in Dialogflow. Then learn how to write inline cloud functions to satisfy a fulfillment in Dialogflow. A concluding exercise deals with creating a chatbot in Dialogflow.
In this course, explore the advanced concepts and features for developing and deploying chatbots, working with contexts, integrating with alternate platforms, and deploying fulfillments. Begin by looking at linear and nonlinear human/chatbot conversations. Next, work with input and output contexts. Contexts represent the current state of a user's request in a dialogue. Move on to follow-up intents, which allow you to easily shape a conversation without needing to create and manage contexts manually. Create the entry point for a nonlinear conversation by using contexts, then carry those contexts through a chatbot dialog to produce nonlinear conversations. Explore how to integrate Dialogflow chatbots with other platforms and deploy a fulfillment in Dialogflow. Access and use Actions on Google in Dialogflow and test a chatbot by using Google Assistant. Integrate Dialogflow chatbots with Google Assistant. Learn about Chatfuel building blocks, examining the use of prebuilt flows, text and typing elements, quick replies, images, and send blocks in Chatfuel. In the closing exercise, describe linear and nonlinear chatbot conversations and build a basic chatbot with Chatfuel.
In this course, participants examine the Amazon Web Services (AWS) Alexa Skills Kit, including the use of invocations, intents, utterances, and slots. Testing with Alexa Simulator and Echosim is also covered. Begin by creating a skill in the Alexa Development Console and looking at the use of invocations with the Alexa skill. Then discover how built-in intents are used in the Alexa Development Console. Next, create and use custom intents, utterances, and slots in the Alexa Development Console. To review: an intent is a construct representing an action that fulfills a spoken request, utterances are related spoken phrases mapped to the intent, while slots are optional arguments also related to the intent. You will learn how to build a Lambda function and integrate it with an Alexa skill, then test a skill by using Alexa Simulator and Echosim. You will configure a skill to use DynamoDB for persisting session data. Finally, create an Alexa skill that manages a multistage conversation. The concluding exercise directs you to create a skill by using the Skills Kit in the Alexa Development Console.
OpenAI offers an Application Programming Interface (API) that allows users to create, manipulate, and translate text using its available models and endpoints. Understanding how the API works, its limits, and how to effectively use best practices will help you get the most from the interface. In this course, you will explore OpenAI's API, generate an API key, and learn about the impact of social bias and blindness in models. Then, you will discover the ethical usage policy and safety and privacy concerns of OpenAI. Next, you will examine available models and endpoints. You will create a simple text completion, parse a response, troubleshoot common errors, and apply parameters to improve your results. Finally, you will use the language translation API to translate to and from English and identify organizational best practices when using OpenAI to handle scaling, latency, and limits.
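As a hedged sketch of the kind of request the course walks through, the snippet below builds a text-completion payload for OpenAI's REST API. The endpoint path and parameter names follow OpenAI's public API documentation; the key, model name, and prompt are placeholders, and the network call itself is left commented out so the example stays self-contained:

```python
# Constructing a text-completion request for the OpenAI API.
import json

api_key = "YOUR_API_KEY"  # placeholder: generate a real key in your account

payload = {
    "model": "gpt-3.5-turbo-instruct",  # a completion-capable model
    "prompt": "Translate to French: Hello, world.",
    "max_tokens": 60,       # upper bound on generated tokens
    "temperature": 0.2,     # lower values give more deterministic output
}
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
body = json.dumps(payload)

# The actual call (requires a valid key and network access):
# import requests
# resp = requests.post("https://api.openai.com/v1/completions",
#                      headers=headers, data=body)
# text = resp.json()["choices"][0]["text"]   # parse the completion
```

Parsing the response means reading `choices[0].text`; rate-limit and authentication errors surface as HTTP status codes, which is where the course's troubleshooting material comes in.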
Generative artificial intelligence (AI) has taken the tech and business world by storm. It currently can create stories, text, images, summaries, essays, and much more, with sometimes nothing more than a few words to describe what you want. Unfortunately, it can also be used in ways that can be harmful, such as creating deepfakes and false information. In this course, you will discover the differences between generative AI and general AI and look at the history and future of generative AI. You will explore applications of generative AI and the ethical, safety, security, and privacy concerns associated with its use. Then you will identify common generative AI application programming interfaces (APIs) and best practices when using generative AI. Next, you will find out how to create images and text with generative AI, and you will focus on the challenges of AI integration into processes and workflows. Finally, you will learn how to integrate generative AI APIs to create tools like chatbots.
Google Bard is a generative artificial intelligence (AI) that uses a large language model to facilitate answering questions and creating content for a wide range of topics. Understanding how the model works, its limitations, and what functionality the service provides enables anyone to optimize their usage of the service to accomplish a multitude of tasks. In this course, you will explore the Bard interface and learn to use Bard to answer questions and create content while also understanding Bard's limitations, features, and best practices. Additionally, you will explore the ethics, privacy, and security concerns that can come with using a generative AI like Bard.
Google Bard can be used to write creative content, but it also allows you to share that content, adjust content to reflect a tone, and translate text to and from English. These capabilities can be used by almost anyone in virtually any industry to expedite tasks. In this course, you will learn how to use Bard to create poems, stories, lyrics, and other content. You will also learn to create summaries and outlines. Next, you will discover Bard's object recognition and image-finding capabilities. Finally, you will be introduced to Bard's translation capabilities.
Google Bard is a useful tool for content creation, translation, and analysis; however, using the PaLM 2 application programming interface (API), it is possible to integrate Bard directly into your own processes, either through the API itself or through the ready-made client libraries. This does require some programming and command line interface (CLI) experience, but even a small amount should be sufficient to follow along. In this course, you will learn about Bard's analytical capabilities, the PaLM 2 API, and how to use the API to accomplish tasks programmatically rather than through the Bard web interface. Additionally, you will explore the PaLM models, supported languages and libraries, and the interfaces used for communicating with PaLM.
Python and Google Bard can be combined to create applications and programs via the PaLM 2 API. These programs can solve problems or integrate Bard into workflows or processes. In this course, you will learn to solve code problems with Bard and how to use the Python Client API library to connect and use PaLM to create applications that integrate Bard. In particular, you will explore how to programmatically check content for appropriate communications, adjust parameters to fine-tune responses, troubleshoot common problems, add security to a process, and create a simple chatbot.
Generative artificial intelligence (AI) can create new content, such as text, images, and music. It is powered by machine learning (ML) models that have been trained on massive datasets of existing content. Prompt engineering is the process of designing and crafting prompts that guide generative AI models to produce the desired output. You will start this course by learning how you can leverage prompt engineering to improve your day-to-day and work-related tasks. Next, you will explore how to use generative AI chatbots such as ChatGPT, Google Bard, and Microsoft Bing Chat. You will create an account with OpenAI and explore ChatGPT's interface, before diving into natural-language conversation. You will then explore Google Bard and Bing Chat for conversational AI. Finally, you will work with Perplexity AI.
The OpenAI Playground is a web-based tool that lets you experiment with large language models (LLMs) to generate text, translate languages, write creative content, and answer your questions in an informative way. With the Playground, you can input text prompts and receive real-time outputs, and you can adjust hyperparameters to control the creativity, randomness, length, and repetition of the model responses. In this course, you will begin by creating an account to use the OpenAI Playground and you will learn how you are billed for its usage. Next, you will explore the different chat modes and models and work with the hyperparameters that allow you to configure creativity, randomness, repetition, and the length of model responses. You will also use stop sequences, which terminate the output when a specific phrase is reached, as well as the frequency and presence penalty, which penalize repetition of words and topics. Finally, you will learn how to view probabilities in generated text and explore how to use presets to share prompts and prompt parameters with other people.
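The temperature hyperparameter discussed above can be illustrated without the Playground at all: temperature rescales the model's token scores before sampling. The toy distribution below is made up for illustration and is not the real model internals:

```python
# How temperature reshapes a sampling distribution: low temperature
# sharpens it (less random), high temperature flattens it (more random).
import math

def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                  # toy scores for three tokens
cool = softmax_with_temperature(logits, 0.2)  # top token dominates
hot = softmax_with_temperature(logits, 2.0)   # probabilities flatten out
```

The other sliders work on the sampled output rather than the distribution: stop sequences cut generation at a phrase, while frequency and presence penalties subtract from the scores of already-used tokens.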
Version control systems allow you to track changes to your code over time and collaborate on projects. They are widely used in software development, but can also be used for other purposes, such as tracking changes to documentation, website code, or other types of files. Git is a popular version control system that has a steep learning curve for beginners, but with help from generative AI tools, you'll find that learning Git is easy and intuitive. In this course, you will start with the basics of Git and learn the difference between local Git repositories and remote repositories on hosting services such as GitHub and GitLab. You will develop prompts with generative AI tools such as ChatGPT and use their responses to guide you while you are exploring Git commands. Next, you will learn how to use Git for version control and how to add files to the staging area. After that, you will commit your files to your repository and view all of the commits. Finally, you will learn how to perform operations such as restoring and modifying staged files and how to use commit hashes to uniquely identify commits and perform operations on them.
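The local workflow described above can be sketched in a few commands; run this in an empty scratch directory, and note that the directory name, file name, and user identity are placeholder values:

```shell
# A minimal local Git walkthrough: init, stage, commit, inspect, unstage.
set -e
git init demo
cd demo
git config user.email "you@example.com"   # placeholder identity
git config user.name "Example User"

echo "hello" > notes.txt
git add notes.txt                 # move the file into the staging area
git commit -m "first commit"      # record the staged snapshot
git log --oneline                 # each line starts with a short commit hash

echo "more" >> notes.txt
git add notes.txt                 # stage the modification
git restore --staged notes.txt    # change your mind: unstage it
```

The short hashes printed by `git log --oneline` are the commit identifiers the course uses to address individual commits.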
In today's rapidly evolving technological landscape, generative artificial intelligence (AI) has gained significant attention for its ability to create intelligent solutions. This path focuses on leveraging the Azure cloud platform to explore and harness the power of generative AI. In this course, you'll explore how generative AI works and types of generative AI models. Then, you'll be introduced to Azure services for generative AI, including Azure OpenAI service, Azure Bot service, and Azure Machine Learning. Finally, you'll learn about privacy and policy considerations for generative AI, chatbot creation, personalized marketing content, new product development, and training and tuning generative AI models.
Artificial intelligence (AI) is being harnessed everywhere today for a myriad of different practical applications. Microsoft Azure's OpenAI service is a key component in the development of AI apps in Azure and has gained significant attention for its ability to create intelligent solutions. In this course, you'll learn about Azure OpenAI service, including models, practical uses, and AI content generation principles with OpenAI. Then, you'll explore integration with Azure OpenAI, text and question answering, OpenAI vs. other generative AI services, and OpenAI pricing. Finally, you'll dig into limitations of OpenAI and what the future holds for Azure OpenAI.
GitHub, in conjunction with Git, provides a powerful framework for collaboration in software development. Git handles version control locally, while GitHub extends this functionality by serving as a remote repository, enabling teams to collaborate seamlessly by sharing, reviewing, and managing code changes. In this course, you will begin by setting up a GitHub account and authenticating yourself from the local repo using personal access tokens. You will then push your code to the remote repository and view the commits. Next, you will explore additional features of Git and GitHub using generative AI tools as a guide. You will also create another user to collaborate on your remote repository, and you'll sync changes made by other users to your local repo. Finally, you will explore how to merge divergent branches. You will discover how to resolve a divergence using the merge method with help from ChatGPT and bring your local repository in sync with remote changes.
Branches are separate, independent lines of development for people working on different features. Once you have finished your work, you can merge all your branches together. You will start this course by creating separate feature branches on Git and pushing commits to these branches. You will use prompt engineering to get the right commands to use for branching and working on branches. You will also explore how to develop your code on the main branch, switch branches, and then ultimately commit to a feature branch. Next, you will explore how you can stash changes to your project to work on them later. Finally, you will discover how to resolve divergences in the branches. You will try out both the merge and rebase methods and confirm that the branch commits are combined properly.
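A compressed sketch of the branching, stashing, and merging workflow above follows; directory and branch names are illustrative, and the merge here fast-forwards because the branches have not diverged:

```shell
# Feature branches, stash, and merge in a scratch repository.
set -e
git init branch-demo
cd branch-demo
git config user.email "you@example.com"   # placeholder identity
git config user.name "Example User"
echo "base" > app.txt
git add app.txt
git commit -m "initial commit"

git switch -c feature-x           # create and move to a feature branch
echo "feature work" >> app.txt
git commit -am "commit on the feature branch"

echo "wip" >> app.txt
git stash                         # shelve uncommitted work for later
git switch -                      # back to the starting branch
git merge feature-x               # combine the branch commits
git stash pop                     # restore the shelved change
```

When the branches have diverged, the same `git merge` produces a merge commit instead, and `git rebase` offers the alternative of replaying your commits on top of the other branch.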
Python is a powerful programming language for data science, and pandas is a popular open-source data manipulation and analysis library in Python. Combined with prompt engineering techniques, working with data in Python is easy and intuitive, which allows you to be more productive and efficient. You will start this course by leveraging prompt engineering to work with pandas. You will explore libraries such as Matplotlib, seaborn, and Plotly, which are used for visualization and charting. With ChatGPT's help you will read data from a CSV file and inspect the DataFrame. You'll delve into pandas Series objects and explore their creation and manipulation. You will leverage prompt engineering techniques to access elements in a Series using index labels through loc, iloc, at, and iat functions and perform operations like modification and visualization. Finally, you will explore how to use pandas DataFrame objects and create basic DataFrames using lists and dictionaries for data assignment and inspection. You will also generate code to perform basic operations on DataFrames using tools such as ChatGPT and Bard.
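The Series access patterns named above (`loc`, `iloc`, `at`, `iat`) look like this in practice; the data is made up for illustration, much like the snippets a chatbot would generate from a prompt:

```python
# pandas Series: creation, label- and position-based access, modification.
import pandas as pd

prices = pd.Series([9.5, 7.25, 12.0], index=["tea", "coffee", "juice"])

by_label = prices.loc["coffee"]   # label-based lookup
by_pos = prices.iloc[0]           # integer position-based lookup
single = prices.at["juice"]       # fast scalar access by label
single_pos = prices.iat[1]        # fast scalar access by position

prices.loc["coffee"] = 8.0        # modify an element in place

# Basic DataFrames can be built from lists and dictionaries.
df = pd.DataFrame({"item": ["tea", "coffee"], "price": [9.5, 8.0]})
```

Reading real data is a one-liner (`pd.read_csv("data.csv")`), after which `df.head()` and `df.info()` are the usual first inspection steps.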
With DataFrames in pandas you can filter, aggregate, join, pivot, and manipulate data efficiently. These operations enable data analysts and scientists to work with datasets for various data-driven tasks. Prompt engineering tools are adept at generating code to make these tasks simple. You will start this course by exploring the configurations you can apply to read in your data. You'll present your problem statement to ChatGPT and explore the use of arguments to configure various aspects of the file reading, such as defining column names, and specifying which columns to include in the DataFrame. Additionally, you will learn how to read data from different sources, including JSON, Excel, and the Clipboard and write files out to these different formats. Next, you'll delve into common DataFrame operations, examine statistics on your data, rename columns, iterate over, and sort your data. As you encounter issues, you will turn to prompt engineering to help debug them. Finally, you'll explore how you can enhance your data using computed columns. You'll harness the power of two essential functions, apply and map, to transform your records. You will also focus on utilizing generative AI for code generation and you will employ the chain-of-thought prompting method to guide the chatbot in generating code effectively.
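The reading configuration and computed-column techniques above can be sketched as follows; the CSV content is inlined so the example is self-contained, and the column names are invented for illustration:

```python
# Configuring pandas file reading, then enhancing data with apply and map.
import io
import pandas as pd

raw = "1,Widget,2.5\n2,Gadget,4.0\n"
df = pd.read_csv(
    io.StringIO(raw),                   # stands in for a file path
    header=None,
    names=["id", "product", "price"],   # define the column names ourselves
    usecols=["product", "price"],       # include only some columns
)

df = df.rename(columns={"product": "name"})
df = df.sort_values("price", ascending=False)

# map transforms a single column element-wise; apply can work row-wise.
df["price_with_tax"] = df["price"].map(lambda p: round(p * 1.1, 2))
df["label"] = df.apply(lambda row: f"{row['name']}: {row['price']}", axis=1)
```

`pd.read_json`, `pd.read_excel`, and `pd.read_clipboard` follow the same pattern for the other sources, with matching `to_json`/`to_excel` writers.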
Artificial intelligence (AI) is transforming the way businesses and governments are developing and using information. This course offers an overview of AI, its history, and its use in real-world situations; prior knowledge of machine learning, neural network, and probabilistic approaches is recommended. There are multiple definitions of AI, but the most common view is that it is software that enables a machine to think and act like a human, and to think and act rationally. Because AI differs from plain programming, the programming language used will depend on the application. In this series of videos, you will be introduced to multiple tools and techniques used in AI development. Also discussed are important issues in its application, such as the ethics and reliability of its use. You will set up a programming environment for developing AI applications and learn the best approaches to developing AI, as well as common mistakes. Gain the ability to communicate the value AI can bring to businesses today, along with multiple areas where AI is already being used.
This course covers simple and complex types of AI (artificial intelligence) available in today's market. In it, you will explore theory-of-mind research, self-aware AI, artificial narrow intelligence, artificial general intelligence, and artificial super intelligence. First, learn the ways in which AI is used today in agriculture, medicine, by the military, in financial services, and by governments. AI is a special field of computer science that draws on mathematics, statistics, and the cognitive and behavioral sciences, performing actions based on input data, often by mimicking the activity within the human brain. No data can be 100 percent accurate, bringing a certain degree of uncertainty to any kind of AI application. So this course seeks to explain how and why AI needs to be developed for a particular use scenario, helping you understand the many aspects involved in AI programming and how AI performance needs to be good enough to complete a certain task.
Images often need to be manipulated to extract meaningful portions or to prepare them for a machine learning pipeline. OpenCV can help with this. In this course, you'll investigate a variety of image manipulation operations using OpenCV. You'll begin by recognizing how to filter certain portions of an image using bitwise operations. Next, you'll explore the concept of masks and how to use them while extracting parts of an image. You'll then outline how to apply geometrical operations by resizing an image to specific dimensions and discover challenges that such operations present. You'll finish the course by examining image transformations such as rotations and translations to help orient an image to your requirements. Finally, you'll discover how to flip and warp images to present them from a different perspective.
Many image processing operations involve complex math, but when using OpenCV, much of that is abstracted from the developer. In this course, you'll gain a high-level understanding of advanced image operations in OpenCV. You'll begin by recognizing how to apply different blur operations to an image. These range from simple blurs to Gaussian and median blurs. While doing so, you'll examine their specific advantages and disadvantages and how to distinguish between them. Moving on, you'll outline how to highlight objects in an image using edge detection and augment images by adding shapes and objects to them. Finally, you'll discover how to work with pre-trained classifiers to detect people in an image and perform morphological transformations to emphasize or suppress specific parts of an image.
In developing AI (artificial intelligence) applications, it is important to pay close attention to human-computer interaction (HCI) and design each application for specific users. To make a machine intelligent, a developer uses multiple techniques from an AI toolbox; these tools are actually mathematical algorithms that can demonstrate intelligent behavior. The course examines the following categories of AI development: algorithms, machine learning, probabilistic modeling, neural networks, and reinforcement learning. There are two main types of AI tools available: statistical learning, in which large amounts of data are used to make certain generalizations that can be applied to new data; and symbolic AI, in which an AI developer must create a model of the environment with which the AI agent interacts and set up the rules. Learn to identify potential AI users, the context of using the applications, and how to create user tasks and interface mock-ups.
Human-computer interaction (HCI) design is the starting point for an artificial intelligence (AI) program. Overall, HCI design is a creative problem-solving process oriented to the goal of satisfying the largest number of customers. In this course, you will cover multiple methodologies used in the HCI design process and explore prototyping and useful techniques for software development and maintenance. First, learn how the anthropomorphic approach to HCI focuses on keeping the interaction with computers similar to human interactions, while the cognitive approach pays attention to the capacities of a human brain. Next, learn to use the empirical approach to HCI to quantitatively evaluate interaction and interface designs, and how predictive modeling is used to optimize the screen space and make interaction with the software more intuitive. Finally, you will examine how to continually improve HCI designs for AI applications, develop personas, use case studies, and conduct usability tests.
In this course, you'll explore basic Computer Vision concepts and its various applications. You'll examine traditional ways of approaching vision problems and how AI has evolved the field. Next, you'll look at the different kinds of problems AI can solve in vision. You'll explore various use cases in the fields of healthcare, banking, retail, cybersecurity, agriculture, and manufacturing. Finally, you'll learn about different tools that are available in CV.
In this course, you'll explore Computer Vision use cases in fields like consumer electronics, aerospace, automotive, robotics, and space. You'll learn about basic AI algorithms that can help you solve vision problems and explore their categories. Finally, you'll apply hands-on development practices on two interesting use cases to predict lung cancer and deforestation.
To implement cognitive modeling inside AI systems, a developer needs to understand the major differences between commonly used cognitive models and their best qualities. Today, cognitive models are actively utilized in healthcare, neuroscience, manufacturing, and psychology, and their importance compared to other AI approaches is expected to rise. Developing a firm understanding of cognitive modeling and its use cases is essential to anyone involved in creating AI systems. In this course, you'll identify unique features of cognitive models, which help create even more intelligent software systems. First, you will learn about the different types of cognitive models and the disciplines involved in cognitive modeling. Further, you will discover the main use cases for cognitive models in the modern world and learn about the history of cognitive modeling and how it is related to computer science and AI.
Practice plays an important role in AI development and helps one get familiar with commonly used tools and frameworks. Knowing which methods to apply and when is critical to completing projects quickly and efficiently. Based on the code examples provided, you will be able to quickly learn important cognitive modeling libraries and apply this knowledge to new projects in the field. In this course, you'll learn the essentials of working with cognitive models in a software system. First, you will get a detailed overview of each type of learning used in cognitive modeling. Further, you will learn about the toolset used for cognitive modeling with Python and recall which role cognitive models play in AI and business. Finally, you will go through various cognitive model implementations to develop the skills necessary to implement cognitive modeling in the real world.
An Artificial Intelligence (AI) Architect works and interacts with various groups in an organization, including IT Architects and IT Developers. It is important to differentiate between the work activities performed by these groups and how they work together. This course will introduce you to the AI Architect role. You'll discover what the role is, why it's important, and who the architect interacts with on a daily basis. We will also examine and categorize their daily work activities and will compare those activities with those of an IT Architect and an IT Developer. The AI Architect helps many groups within the organization, and we will examine their activities within those groups as well. Finally, we will highlight the roles the AI Architect plays in the organizations to which they belong.
In this course, you'll be introduced to the concepts, methodologies, and tools required for effectively and efficiently incorporating AI into your IT enterprise planning. You'll look at enterprise planning from an AI perspective, and view projects in tactical/strategic and current, intermediate, or future state contexts. You'll explore how to use an AI Maturity Model to conduct an AI Maturity Assessment of the current and future states of AI planning, and how to conduct a gap analysis between those states. Next, you'll learn about the components of a discovery map, project complexity, and a variety of graphs and tables that enable you to handle complexity. You'll see how complexity can be significantly reduced using AI accelerators and how they affect specific phases of the AI development lifecycle. You'll move on to examine how to create an AI enterprise roadmap using all of the artifacts just described, plus a KPIs/Value Metrics table, and how both of these can be used as inputs to an analytics dashboard. Finally, you'll explore numerous examples of AI applications of different types in diverse business areas.
The inner workings of many deep learning systems are complicated, if not impossible, for the human mind to comprehend. Explainable Artificial Intelligence (XAI) aims to provide AI experts with transparency into these systems. In this course, you'll describe what Explainable AI is, how to use it, and the data structures behind XAI's preferred algorithms. Next, you'll explore the interpretability problem and today's state-of-the-art solutions to it. You'll identify XAI regulations, define the "right to explanation", and illustrate real-world examples where this has been applicable. You'll move on to recognize both the Counterfactual and Axiomatic methods, distinguishing their pros and cons. You'll investigate the intelligible models method, along with the concepts of monotonicity and rationalization. Finally, you'll learn how to use a Generative Adversarial Network.
Adopting the foundational techniques of natural language processing (NLP), together with the Bidirectional Encoder Representations from Transformers (BERT) technique developed by Google, allows developers to integrate NLP pipelines into their projects efficiently and without the need for large-scale data collection and processing. In this course, you'll explore the concepts and techniques that pave the foundation for working with Google BERT. You'll start by examining various aspects of NLP techniques useful in developing advanced NLP pipelines, namely, those related to supervised and unsupervised learning, language models, transfer learning, and transformer models. You'll then identify how BERT relates to NLP, its architecture and variants, and some real-world applications of this technique. Finally, you'll work with BERT and both Amazon review and Twitter datasets to develop sentiment predictors and create classifiers.
Solid knowledge of the AI technology landscape is fundamental in choosing the right tools to use as an AI Architect. In this course, you'll explore the current and future AI technology landscape, comparing the advantages and disadvantages of common AI platforms and frameworks. You'll move on to examine AI libraries and pre-trained models, distinguishing their advantages and disadvantages. You'll then classify AI datasets and see a list of dataset topics. Finally, you'll learn how to make informed decisions about which AI technology is best suited to your projects.
AI architecture patterns, some of which have been known for many years, have been formally identified as such only in the last couple of years. In this course, you'll identify 12 reusable, standard AI architecture patterns, and 3 AI architecture anti-patterns frequently used to architect common AI applications. You'll learn to differentiate between architecture and design patterns and explore how they're used. Next, you'll examine the structure of an AI architecture pattern, and that of an anti-pattern and its different parts. You'll identify when specific patterns should or can be used, when they need to be avoided, and how to avoid using anti-patterns. You will also learn that even good patterns can become anti-patterns when applied to solve a problem they were not intended for.
Designing successful and competitive AI products involves thorough research into existing AI applications in various markets. Most large-scale businesses use AI in their workflows to optimize business operations. AI Architects should be aware of all possible applications of AI so they can look at market trends and come up with the most appropriate, novel, and useful AI solutions for their industry. In this course, you'll explore examples of standard AI applications in various industries like Finance, Marketing, Sales, Manufacturing, Transportation, Cybersecurity, Pharmaceutical, and Telecommunications. You'll examine how AI is utilized by leading AI companies within each of these industries. You'll identify which AI technologies are common across all industries and which are industry-specific. Finally, you'll recognize why AI is imperative to the successful operation of many industries.
AI Practitioner is a cross-industry, advanced AI Developer position that is in growing demand in the modern world. Candidates for this role need to demonstrate proficiency in optimizing and tuning AI solutions to deliver the best possible performance in the real world. AI Practitioners require more advanced knowledge of algorithm implementations and should have a firm grasp of the latest toolsets available. In this course, you'll be introduced to the AI Practitioner role in the industry. You'll examine an AI Practitioner's skillset and responsibilities in relation to AI Developers, Data Scientists, and ML and AI Engineers. Finally, you'll learn about the scope of work for AI Practitioners, including their career opportunities and pathways.
Optimization is required for any AI model to deliver reliable outcomes in most use cases. AI Practitioners use their knowledge of optimization techniques to choose and apply various solutions and improve the accuracy of existing models. In this course, you'll learn about advanced optimization techniques for AI development, including multiple optimization approaches like Gradient Descent, Momentum, Adam, AdaGrad, and RMSprop. You'll examine how to determine the preferred optimization technique to use and the overall benefits of optimization in AI. Lastly, you'll have a chance to practice implementing optimization techniques from scratch and applying them to real AI models.
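To give a flavor of the from-scratch implementations mentioned above, here is a minimal sketch of vanilla gradient descent next to a momentum update, minimizing a toy one-dimensional objective. The objective function, learning rate, and iteration counts are illustrative assumptions, not course material.

```python
# Minimal sketch: vanilla gradient descent vs. momentum, minimizing the
# toy objective f(x) = (x - 3)^2. All constants are illustrative assumptions.

def grad(x):
    """Gradient of f(x) = (x - 3)^2."""
    return 2.0 * (x - 3.0)

def gradient_descent(x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step directly against the gradient
    return x

def momentum(x0, lr=0.1, beta=0.9, steps=300):
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(x)  # accumulate a velocity term
        x += v
    return x

print(round(gradient_descent(0.0), 4))  # → 3.0
print(round(momentum(0.0), 4))          # → 3.0
```

Both updates converge to the minimum at x = 3; momentum's velocity term smooths the trajectory, which is the same idea Adam, AdaGrad, and RMSprop extend with adaptive per-parameter step sizes.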
Any aspiring AI developer has to clearly understand the responsibilities and expectations that come with entering the industry in this role. AI Developers can come from various backgrounds, but there are clear distinctions between this role and others like Software Engineer, ML Engineer, Data Scientist, or AI Engineer. Therefore, any AI Developer candidate has to possess the required knowledge and demonstrate proficiency in certain areas. In this course, you will learn about the AI Developer role in the industry and compare the responsibilities of AI Developers with other engineers involved in AI development. After completing the course, you will recognize the mindset required to become a successful AI Developer and become aware of multiple paths for career progression and future opportunities.
A working knowledge of multiple AI development frameworks is essential to AI developers. Depending on your particular focus, you may favor one framework over another. However, various companies in the industry tend to use different frameworks in their products, so knowing the basics of each framework is quite helpful to the aspiring AI Developer. In this course, you will explore popular AI frameworks and identify their key features and use cases. You will identify the main differences between AI frameworks and work with Microsoft CNTK and Amazon SageMaker to implement model flow.
Robots can utilize machine learning, deep learning, reinforcement learning, as well as probabilistic techniques to achieve intelligent behavior. This application of AI to robotic systems is found in the automotive, healthcare, logistics, and military industries. With increasing computing power and sophistication in small robots, more industry use cases are likely to emerge, making AI development for robotics a useful AI developer skill. In this course, you'll explore the main concepts, frameworks, and approaches needed to work with robotics and apply AI to robots. You'll examine how AI and robotics are used across multiple industries. You'll learn how to work with commonly used algorithms and strategies to develop simple AI systems that improve the performance of robots. Finally, you'll learn how to control a robot in a simulated environment using deep Q-networks.
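As a stepping stone toward the deep Q-networks covered above, tabular Q-learning shows the same reinforcement-learning loop without the neural network. The tiny one-dimensional corridor environment below is an illustrative assumption, far simpler than a simulated robot, but the temporal-difference update is the one a DQN approximates.

```python
import random

# Simplified sketch: tabular Q-learning on a tiny 1-D corridor (5 states,
# goal at state 4). The environment and constants are illustrative assumptions.

N_STATES, GOAL = 5, 4
ACTIONS = [-1, 1]  # move left, move right

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda i: q[s][i])
            nxt, r, done = step(s, ACTIONS[a])
            target = r + (0.0 if done else gamma * max(q[nxt]))
            q[s][a] += alpha * (target - q[s][a])  # temporal-difference update
            s = nxt
    return q

q = q_learning()
policy = [max((0, 1), key=lambda i: q[s][i]) for s in range(N_STATES)]
print(policy)  # the greedy policy moves right (action index 1) toward the goal
```

A deep Q-network replaces the `q` table with a neural network so the same update scales to state spaces, like camera images, that are far too large to enumerate.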
Cognitive modeling can provide additional human qualities to AI systems. It is traditionally used in cognitive machines and expert systems. However, with extra computing power, it can be applied to more profound AI approaches like neural networks and reinforcement learning systems. Knowledge of cognitive modeling applications is essential to any AI developer aspiring to design AI architectures and develop large-scale applications. In this course, you'll examine the role of cognitive modeling in AI development and its possible applications in NLP, image recognition, and neural networks. You'll outline core cognitive modeling concepts and significant industry use cases. You'll list open source cognitive modeling frameworks and explore cognitive machines, expert systems, and reinforcement learning in cognitive modeling. Finally, you'll use cognitive models to solve real-world problems.
The world of technology continues to transform at a rapid pace, with intelligent technology incorporated at every stage of the business process. Intelligent information systems (IIS) reduce the need for routine human labor and allow companies to focus instead on hiring creative professionals. In this course, you'll explore the present and future roles of intelligent information systems in AI development, recognizing the current demand for IIS specialists. You'll list several possible IIS applications and learn about the roles AI and ML play in creating them. Next, you'll identify significant components of IIS and the purpose of these components. You'll examine how you would go about creating a self-driving vehicle using IIS components. Finally, you'll work with Python libraries to build high-level components of a Markov decision process.
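To give a flavor of the Markov decision process work mentioned above, here is a minimal value-iteration sketch in plain Python. The two-state MDP, its transition probabilities, rewards, and discount factor are illustrative assumptions, not material from the course.

```python
# Minimal sketch of value iteration for a tiny Markov decision process.
# The states, actions, transitions, rewards, and discount factor are
# illustrative assumptions.

# P[state][action] = list of (probability, next_state, reward) triples
P = {
    "low":  {"wait":   [(1.0, "low", 1.0)],
             "invest": [(0.6, "high", 0.0), (0.4, "low", 0.0)]},
    "high": {"wait":   [(1.0, "high", 2.0)],
             "invest": [(1.0, "high", 2.0)]},
}
GAMMA = 0.9  # discount factor

def value_iteration(tol=1e-8):
    v = {s: 0.0 for s in P}
    while True:
        # Bellman optimality backup: best expected discounted return per state
        new_v = {s: max(sum(p * (r + GAMMA * v[s2]) for p, s2, r in outcomes)
                        for outcomes in P[s].values())
                 for s in P}
        if max(abs(new_v[s] - v[s]) for s in P) < tol:
            return new_v
        v = new_v

v = value_iteration()
print(round(v["high"], 2))  # → 20.0 (reward 2 forever, discounted: 2 / (1 - 0.9))
```

The same Bellman backup underlies the planning components of a self-driving pipeline, where states and transitions come from perception and a learned world model rather than a hand-written table.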
Bidirectional Encoder Representations from Transformers (BERT), a natural language processing technique, takes the capabilities of language AI systems to great heights. Google's BERT reports state-of-the-art performance on several complex tasks in natural language understanding. In this course, you'll examine the fundamentals of traditional NLP and distinguish them from more advanced techniques, like BERT. You'll identify the terms attention and transformer and how they relate to NLP. You'll then examine a series of real-life applications of BERT, such as in SEO and masking. Next, you'll work with an NLP pipeline utilizing BERT in Python for various tasks, namely, text tokenization and encoding, model definition and training, and data augmentation and prediction. Finally, you'll recognize the benefits of using BERT and TensorFlow together.
Bidirectional Encoder Representations from Transformers (BERT) can be implemented in various ways, and it is up to AI practitioners to decide which one is the best for a particular product. It is also essential to recognize all of BERT's capabilities and its full potential in NLP. In this course, you'll outline the theoretical approaches to several BERT use cases before illustrating how to implement each of them. In full, you'll learn how to use BERT for search engine optimization, sentence prediction, sentence classification, token classification, and question answering, implementing a simple example for each use case discussed. Lastly, you'll examine some fundamental guidelines for using BERT for content optimization.
Tuning hyperparameters when developing AI solutions is essential since the same model might behave quite differently with different parameter settings. AI Practitioners recognize multiple hyperparameter tuning approaches and are able to quickly determine the best set of hyperparameters for particular models using an AI toolbox. In this course, you'll learn advanced techniques for hyperparameter tuning for AI development. You'll examine how to recognize the hyperparameters in ML and DL models. You'll learn about multiple hyperparameter tuning approaches and when to use each approach. Finally, you'll have a chance to tune hyperparameters for a real AI project using multiple techniques.
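The simplest of the tuning approaches covered, exhaustive grid search, can be sketched in a few lines. The scoring function below is a hypothetical stand-in; in practice the score would come from cross-validating a real model, and the parameter names and grid values are illustrative assumptions.

```python
import itertools

# Minimal sketch of exhaustive grid search over hyperparameters. The
# "validation score" is a hypothetical stand-in for cross-validation of a
# real model; all parameter names and values are illustrative assumptions.

def validation_score(lr, batch_size):
    """Hypothetical score that peaks at lr=0.01, batch_size=32."""
    return -abs(lr - 0.01) * 100 - abs(batch_size - 32) / 32

grid = {
    "lr": [0.001, 0.01, 0.1],
    "batch_size": [16, 32, 64],
}

def grid_search(grid, score_fn):
    best_params, best_score = None, float("-inf")
    # Evaluate every combination in the Cartesian product of the grid
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        score = score_fn(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = grid_search(grid, validation_score)
print(best)  # → {'lr': 0.01, 'batch_size': 32}
```

Grid search's cost grows exponentially with the number of hyperparameters, which is what motivates the random-search and Bayesian approaches that tuning courses typically cover next.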
In recent times, natural language processing (NLP) has seen many advancements, most of which are in deep learning models. NLP as a problem is very complex, and deep learning models can handle that scale and complexity with many different variations of neural network architecture. Deep learning also has a broad spectrum of frameworks that support NLP problem solving out of the box. Explore the basics of deep learning and different architectures for NLP-specific problems. Examine other use cases for deep learning NLP across industries. Learn about various tools and frameworks used, such as spaCy, TensorFlow, PyTorch, and OpenNMT. Investigate sentiment analysis and explore how to solve a problem using various deep learning steps and frameworks. Upon completing this course, you will be able to use the essential fundamentals of deep learning for NLP and outline its various industry use cases, frameworks, and fundamental sentiment analysis problems.
Natural language processing (NLP) is constantly evolving with cutting-edge advancements in tools and approaches. Neural network architecture (NNA) supports this evolution by providing a method of processing language-based information to solve complex data-driven problems. Explore the basic NNAs relevant to NLP problems. Learn different challenges and use cases for the single-layer perceptron, multi-layer perceptron, and RNNs. Analyze data and its distribution using pandas, graphs, and charts. Examine word vector representations using one-hot encodings, Word2vec, and GloVe, and classify data using recurrent neural networks. After you have completed this course, you will be able to use a product classification dataset to implement neural networks for NLP problems.
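One-hot encoding, the first of the word representations listed above, can be sketched in plain Python. The toy corpus and vocabulary below are illustrative assumptions.

```python
# Minimal sketch of one-hot word encodings over a toy corpus.
# The corpus is an illustrative assumption.

corpus = ["the cat sat", "the dog sat", "the cat ran"]

# Build a vocabulary index from the corpus (sorted for reproducibility).
vocab = sorted({word for sentence in corpus for word in sentence.split()})
index = {word: i for i, word in enumerate(vocab)}

def one_hot(word):
    """Return a one-hot vector for a word in the vocabulary."""
    vec = [0] * len(vocab)
    vec[index[word]] = 1
    return vec

print(vocab)           # → ['cat', 'dog', 'ran', 'sat', 'the']
print(one_hot("cat"))  # → [1, 0, 0, 0, 0]
```

One-hot vectors are sparse and treat every pair of words as equally dissimilar; that limitation is exactly what dense embeddings like Word2vec and GloVe address.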
In the journey to understand deep learning models for natural language processing (NLP), the subsequent iterations are memory-based networks, which are much more capable of handling extended context in languages. While basic neural networks are better than machine learning (ML) models, they still fall short on larger, more complex language data problems. In this course, you will learn about memory-based networks like the gated recurrent unit (GRU) and long short-term memory (LSTM). Explore their architectures, variants, and where they work and fail for NLP. Then, consider their implementations using product classification data and compare different results to understand each architecture's effectiveness. Upon completing this course, you will have learned the basics of memory-based networks and their implementation in TensorFlow to understand the effect of memory and more extended context for NLP datasets.
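The gating idea behind these memory-based networks can be sketched without a framework. Below is a single-unit LSTM cell forward pass in plain Python; the scalar weights, their flat-dictionary layout, and the toy input sequence are illustrative assumptions, simplified from the vector form TensorFlow implements.

```python
import math

# Minimal sketch of one LSTM cell step with scalar (single-unit) weights.
# The weight values and toy input sequence are illustrative assumptions.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell(x, h_prev, c_prev, w):
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate values
    c = f * c_prev + i * g      # cell state: the long-term memory track
    h = o * math.tanh(c)        # hidden state: the per-step output
    return h, c

keys = ["wf", "uf", "bf", "wi", "ui", "bi", "wo", "uo", "bo", "wg", "ug", "bg"]
w = {k: 0.5 for k in keys}      # untrained placeholder weights

h, c = 0.0, 0.0
for x in [1.0, -1.0, 0.5]:      # process a toy input sequence
    h, c = lstm_cell(x, h, c, w)
print(h, c)
```

The forget and input gates decide what the cell state keeps and adds at each step, which is how LSTMs carry context across far longer sequences than a plain RNN.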
An essential aspect of human intelligence is our learning process, constantly augmented by the transfer of concepts and fundamentals. For example, as children, we learn the basic alphabet, grammar, and words, and through the transfer of these fundamentals, we can then read books and communicate with people. This is what transfer learning helps us achieve in deep learning as well. This course will help you learn the fundamentals of transfer learning for NLP, its various challenges, and use cases. Explore various transfer learning models such as ELMo and ULMFiT. Upon completing this course, you will understand the transfer learning methodology of solving NLP problems and be able to experiment with various models in TensorFlow.