ForgeRock

Is the ForgeRock Directory Services Course the Next Best ForgeRock Course?

What is the ForgeRock Directory Services Core Concepts course?

ForgeRock® Directory Services Core Concepts is a 5-day course designed for students who want to learn how to use ForgeRock® Directory Services (DS) as a standalone service or as part of the ForgeRock Identity Platform™ (Platform). Deployment and maintenance of DS are taught according to current best practices. Students gain hands-on experience with DS’s features and capabilities, which they can put to use when deploying DS on the job.

If this course sounds like something you would enjoy learning about, you have come to the perfect place. P2L is happy to announce that the ForgeRock Directory Services Core Concepts course is available for you to enroll in as soon as today.

Find out more about the course next!


Who can take this course?

This course is intended for the following audiences:

  • System integrators
  • System design consultants
  • System architects
  • System developers
  • System administrators
     

Objectives

By the end of this course, you should be able to:

  • Describe how DS is used in a Platform deployment
  • Install DS as a standalone service or as an external data store for the Platform
  • Configure DS using setup profiles during installation
  • Manage backend data stores
  • Monitor and tune DS over HTTP and LDAP for different deployment types
  • Establish privileges and access controls
  • Delegate administrative control
  • Back up and restore backends

    And many more. (A brief LDAP search sketch follows this list.)
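
Much of the day-to-day work covered in the course ultimately comes down to LDAP operations against DS backends. As a rough, hedged illustration (not part of the course material), the Python sketch below uses the open-source ldap3 library to bind to a directory server and search for a user entry; the host, port, bind DN, password, and base DN are placeholders to replace with values from your own DS deployment.

```python
# Hedged illustration: searching a directory server with the ldap3 library.
# Host, port, bind DN, password, and base DN are placeholders, not DS defaults.
from ldap3 import ALL, Connection, Server

server = Server("ds.example.com", port=1389, get_info=ALL)
conn = Connection(
    server,
    user="uid=admin,ou=people,dc=example,dc=com",
    password="change-me",
    auto_bind=True,
)

# Look up a user entry and read a few attributes.
conn.search(
    search_base="ou=people,dc=example,dc=com",
    search_filter="(uid=bjensen)",
    attributes=["cn", "mail"],
)

for entry in conn.entries:
    print(entry.entry_dn, entry.cn, entry.mail)

conn.unbind()
```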

What are the key prerequisites?

To complete this course successfully, you must meet the following requirements:

  • Knowledge and skills necessary to use Linux to complete labs
  • Basic knowledge of LDAP, JSON, REST, and Java

 

Course Contents

The main course chapters to be discussed are as follows:

1: Introducing ForgeRock Directory Services (DS)

2: Maintaining DS in an AM Deployment

3: Deploying DS as a User Store

4: Maintaining DS in a ForgeRock Identity Management (IDM) Deployment

5: Creating a Distributed Topology


To take advantage of this course, enroll with P2L today!

Manage Big Data Better with Cloudera Data Analyst Training

What is Cloudera?

Talend explains that Cloudera is a software company that, for more than a decade, has provided a structured, flexible, and scalable platform, enabling sophisticated analysis of big data using Apache Hadoop, in any environment.

In 2008, key engineers from Facebook, Google, Oracle, and Yahoo came together to create Cloudera. The idea arose from the need to create a product to help everyone harness the power of Hadoop distribution software.

For years, Hadoop had helped businesses and other organizations store, sort, and analyze large volumes of data. Cloudera was launched to help users deploy and manage Hadoop, bringing order and understanding to the data that serves as the lifeblood of any modern organization.

Cloudera allows for a depth of data processing that goes beyond just data accumulation and storage. Cloudera’s enhanced capabilities provide the power to analyze data rapidly and easily while tracking and securing it across all environments. By using Cloudera’s comprehensive audits and lineage tracing, users can know where data originated and why it matters.

What is the Cloudera Data Analyst Training course all about?

Cloudera Educational Services’ four-day Data Analyst Training course will teach you to apply traditional data analytics and business intelligence skills to big data. This course presents the tools data professionals need to access, manipulate, transform, and analyze complex data sets using SQL and familiar scripting languages.

What to Expect?

Through instructor-led discussion and interactive, hands-on exercises, participants will navigate the ecosystem, learning:

  • How the open-source ecosystem of big data tools addresses challenges not met by traditional RDBMSs
  • How to use Apache Hive and Apache Impala to provide SQL access to data
  • Hive and Impala syntax and data formats, including functions and subqueries
  • How to create, modify, and delete tables, views, and databases; load data; and store query results
  • How to create and use partitions and different file formats
  • How to combine two or more datasets using JOIN or UNION, as appropriate
  • What analytic and windowing functions are and how to use them (a short query sketch follows this list)
  • How to store and query complex or nested data structures
  • How to process and analyze semi-structured and unstructured data
  • Techniques for optimizing Hive and Impala queries
  • How to extend the capabilities of Hive and Impala using parameters, custom file formats and SerDes, and external scripts
  • How to determine whether Hive, Impala, an RDBMS, or a mix of these is best for a given task
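
To give a flavour of the kind of SQL the course works with, here is a short, hedged query sketch (not taken from the course materials): a join combined with a windowing function, written in the Hive/Impala-style SQL dialect and submitted through PySpark’s spark.sql() for convenience. The orders and customers tables and their columns are made-up examples.

```python
# Hedged sketch: a join plus a windowing function in Hive/Impala-style SQL,
# submitted through PySpark's spark.sql(); table and column names are made up.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("analyst-sketch")
         .enableHiveSupport()
         .getOrCreate())

result = spark.sql("""
    SELECT c.region,
           o.order_id,
           o.total,
           RANK() OVER (PARTITION BY c.region ORDER BY o.total DESC) AS rank_in_region
    FROM orders o
    JOIN customers c ON o.customer_id = c.customer_id
""")

result.show()
```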

Target Audience & Prerequisites

This course is designed for data analysts, business intelligence specialists, developers, system architects, and database administrators. Some knowledge of SQL is assumed, as is basic Linux command-line familiarity. Prior knowledge of Apache Hadoop is not required.

Get Certified

Upon completion of the course, attendees are encouraged to continue their studies and register for the CCA Data Analyst exam. Certification is a great differentiator. It helps establish you as a leader in the field, providing employers and customers with tangible evidence of your skills and expertise.

Advance your ecosystem expertise

Apache Hive makes transformation and analysis of complex, multi-structured data scalable in Cloudera environments. Apache Impala enables real-time interactive analysis of the data stored in Hadoop using a native SQL environment. Together, they make multi-structured data accessible to analysts, database administrators, and others without Java programming expertise.

Course Contents

Introduction

Apache Hadoop Fundamentals

Introduction to Apache Hive and Impala

Querying with Apache Hive and Impala

Common Operators and Built-In Functions

Data Management

Data Storage and Performance

Working with Multiple Datasets

Analytic Functions and Windowing

Complex Data

Analyzing Text

Apache Hive Optimization

Apache Impala Optimization

Extending Apache Hive and Impala

Choosing the Best Tool for the Job

Conclusion

Summary

Big data has been a hot topic for over five years now. To manage, process, and understand this data, it is important for companies to work with tools that can help systematically extract data and design creative and smart solutions.

Explore the Cloudera Data Analyst Training course to streamline handling and managing big data.

To enroll, contact P2L today!

Cloudera Administrator Training for Apache Hadoop 101

What is Cloudera?

Cloudera Data Platform is the industry’s first enterprise data cloud:

  • Multi-function analytics on a unified platform that eliminates silos and speeds the discovery of data-driven insights
  • A shared data experience that applies consistent security, governance, and metadata
  • True hybrid capability with support for public cloud, multi-cloud, and on-premises deployments

What is Apache?

Wpbeginner describes Apache as the most widely used web server software. Developed and maintained by the Apache Software Foundation, Apache is an open-source software solution available for free. It runs on 67% of all web servers in the world. It is fast, reliable, and secure. It can be highly customized to meet the needs of many different environments by using extensions and modules. Most WordPress hosting providers use Apache as their web server software, although WordPress can run on other web server software as well.

What is Apache Hadoop?

The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing. The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thereby delivering a highly available service on top of a cluster of computers, each of which may be prone to failures.

What is the Cloudera Administrator Training for Apache Hadoop course all about?

Cloudera University’s four-day administrator training course for Apache Hadoop provides participants with a comprehensive understanding of all the steps necessary to operate and maintain a Hadoop cluster using Cloudera Manager. From installation and configuration through load balancing and tuning, Cloudera’s training course is the best preparation for the real-world challenges faced by Hadoop administrators.

Duration

4 Days

Objectives

Through instructor-led discussion and interactive, hands-on exercises, participants will navigate the Hadoop ecosystem, learning topics such as:

  • Cloudera Manager features that make managing your clusters easier, such as aggregated logging, configuration management, resource management, reports, alerts, and service management
  • Configuring and deploying production-scale clusters that provide key Hadoop-related services, including YARN, HDFS, Impala, Hive, Spark, Kudu, and Kafka
  • Determining the correct hardware and infrastructure for your cluster
  • Proper cluster configuration and deployment to integrate with the data center
  • How to load file-based and streaming data into the cluster using Kafka and Flume
  • Configuring automatic resource management to ensure service-level agreements are met for multiple users of a cluster
  • Best practices for preparing, tuning, and maintaining a production cluster
  • Troubleshooting, diagnosing, tuning, and solving cluster issues

Target Audience and Prerequisites

This course is best suited to systems administrators and IT managers who have basic Linux experience. Prior knowledge of Apache Hadoop, Cloudera Enterprise, or Cloudera Manager is not required.

Hands-On Exercises

Throughout the course, hands-on exercises help students build their knowledge and apply the concepts being discussed.

Certification Exam

Upon completion of the course, attendees are encouraged to continue their studies and register for the CCA Administrator certification exam. Certification is a great differentiator. It helps establish you as a leader in the field, providing employers and customers with tangible evidence of your skills and expertise.

Course Details

  • The Cloudera Enterprise Data Hub
  • Installing Cloudera Manager and CDH
  • Configuring a Cloudera Cluster
  • Hadoop Distributed File System
  • HDFS Data Ingest
  • Hive and Impala
  • YARN and MapReduce
  • Apache Spark
  • Planning Your Cluster
  • Advanced Cluster Configuration
  • Managing Resources
  • Cluster Maintenance
  • Monitoring Clusters
  • Cluster Troubleshooting
  • Installing and Managing Hue
  • Security
  • Apache Kudu
  • Apache Kafka
  • Object Storage in the Cloud

Conclusion

Apache Hadoop is one of a kind. It allows organizations to store and analyze unlimited amounts and types of data—all in a single, open-source platform on industry-standard hardware.
Take up the Cloudera Administrator Training for Apache Hadoop course and accelerate the process of discovering patterns in data of all volumes and formats.

To enroll, contact P2L today!

ForgeRock Access Management Core Concepts

The ForgeRock Access Management Course That Will Help You Succeed

What is ForgeRock Access Management?

Access Management from ForgeRock is a comprehensive, unified solution that quickly enables outstanding experiences tailor-made for the unique requirements of your users and employees. With ForgeRock, each user’s digital journey through your organization is seamless and secure.

If this sounds like something you would be interested in, you’ve come to the right place! P2L is proud to announce that it will be offering one of the top ForgeRock Management courses: ForgeRock Access Management Core Concepts.

ForgeRock Access Management Core Concepts

This 5-day intensive course is a combination of instructor-led lessons and demonstrations as well as lab exercises designed to ensure that each topic is fully understood. During this course, students will gain an understanding of ForgeRock® Access Management (AM) design, installation, configuration, and administration.

It presents the core concepts of access management, illustrates the many features of AM, and provides hands-on experience for students to implement a full solution based on real-life use cases, including many ready-to-use features.

 

Target Audiences

The course is designed for those who are responsible for overseeing ForgeRock AM deployments. These can include, but are not limited to, the following:

  • System integrators
  • System integration consultants
  • System architects
  • System developers
  • IT system administrators


Learning Objectives

Upon completing this course, you should be able to:

  • Implement and amend the default authentication provided by AM
  • Set up web agents to control access
  • Configure a basic self-registration flow for users
  • Configure intelligent authentication using trees
  • Manage identity stores

    And many more.

Prerequisites

To complete this course successfully, you must meet the following prerequisites:

  • Understanding of Unix/Linux commands and text editing
  • Understanding of HTTP and web applications
  • A basic understanding of what a directory server does
  • Knowledge of REST
  • Some familiarity with Java-based environments is helpful; programming experience is not required


Enroll today by contacting P2L!

 

Tap Into The Hybrid Cloud World With Nutanix

What is Nutanix?

According to Techzine, Nutanix is a so-called hyper-converged infrastructure (HCI) solution: a software platform (cluster) that runs on top of different kinds of individual servers (nodes). All these servers are linked together through this HCI software platform. All processors, internal memory, hard disks (storage), and network interfaces are bundled into one cluster to run virtual machines.

The powerful thing about an HCI platform is the way all applications and workloads are distributed across the hardware to optimize performance as much as possible. Redundancy is also built in by dividing data and workloads across multiple servers, so if one of the nodes fails, the availability of the platform and applications is not affected.

Nutanix is now a pure software company, but this hasn’t always been the case. It started with its own hardware appliance on which the Nutanix software ran: by buying multiple Nutanix appliances (nodes), you could build your own hyper-converged infrastructure. The company is now so big that it no longer needs to develop and deliver the hardware itself, and the major hardware manufacturers are lining up to partner with it.

Nutanix is great at developing an HCI software platform, but not at building the best hardware. When companies choose Nutanix, they can call Nutanix for software issues and their hardware supplier for hardware-related incidents. This way, customers get the best experience and the best support.

What is the Nutanix Advanced Administration and Performance Management course about?

This course features comprehensive coverage of performance management for Nutanix clusters and details on how to improve data center performance. You’ll learn through hands-on labs how to monitor system performance as well as performance tuning. Also covered are advanced networking and storage to help optimize data center administration.

This course explains in detail how to use the major Acropolis services such as Volumes and Files. The course also explains how to define and manage assets and applications using Calm, including how to connect to clouds, automation of the Life Cycle Management (LCM) application, and how to implement and configure Self Service Portal and governance.

You will learn how to take advantage of Flash mode to improve system performance, as well as how to effectively clone and delete VMs, move them between storage containers, and how to manage VMs (sizing and migration).

This course also covers data protection solutions such as Metro Availability with Witness. Advanced management using the new features of Prism Central and the command line is also covered in detail, including how to take advantage of machine learning for entity management and resource optimization, and how to plan for future growth using Scenario Management in Prism Pro.

Target Audience:

IT administrators, architects, and business leaders who already manage Nutanix clusters in the data center, but who would like more in-depth knowledge of Nutanix data center administration, as well as anyone preparing for the Nutanix Certified Advanced Professional (NCAP) certification (in development).

Course Objectives:

After completing this course, you should be able to:

  • Implement business continuity and disaster recovery strategies
  • Analyze and configure Nutanix systems for peak operational efficiency
  • Use Nutanix tools to analyze workloads and optimize cluster and VM sizing
  • Perform advanced virtual machine administration
  • Customize security for Nutanix systems
  • Anticipate and plan for future resource needs

Prerequisites:

Attendees should meet the following prerequisites:

  • Nutanix Enterprise Cloud Administration 5.5 (ECA 5.5) classroom training or NCP Certification
  • Basic knowledge of Nutanix datacenter administration techniques
  • Familiarity with traditional virtualization storage architectures
  • Comfortable with Linux command-line interface

Course Content:

Module 1: Administering Advanced Virtual Machine Deployments

Module 2: Implementing Business Continuity and Disaster Recovery

Module 3: Configuring Advanced Networking

Module 4: Enabling and Customizing Security Services

Module 5: Managing Acropolis File and Block Services

Module 6: Administering Prism Central and Prism Pro

Module 7: Managing and Optimizing Performance

Module 8: Utilizing Advanced Management Interfaces

Key learnings:

During this course you will learn how to:

Monitor data center performance and manage components to optimize system performance.

Set up and configure advanced VM administration features such as:

  • Self Service Restore
  • Configuration of Nutanix Guest Tools (NGT)
  • Working with Nutanix storage containers to delete and move vDisks

Implement advanced solutions for business continuity and data protection in Nutanix data centers such as:

  • Cloud Connect
  • Metro Availability
  • Advanced API
  • REST API V3

Configure advanced networking features including:

  • Bridge and uplink management
  • Load balancing across multiple NICs
  • Network visualization
  • Physical switch topology and configuration

Customize Nutanix security features such as:

  • Creating and installing SSH keys for Prism Lockdown Mode
  • Two-factor authentication
  • Using Security Technical Implementation Guides (STIGs)

Eliminate the requirement for a third-party file server when sharing files across user workstations or VMs (Nutanix Files) or designing a scale-out storage solution (Nutanix Volumes).

Use Prism Central to:

  • Identify and fix cluster health problems
  • Exploit machine learning for entity management and resource optimization
  • Plan for future growth
  • Manage assets and applications using Calm, Life Cycle Management (LCM), and Self-Service Portal

Practice advanced data center management procedures using hands-on labs.

Get the most out of Nutanix systems by maximizing configuration and operation for peak efficiency.

Guarantee business continuity through advanced data protection strategies.

Validate your new skills by preparing for and completing the NCAP certification (in development).

Conclusion:

If you are looking to learn how to work with one of the most feasible and robust hybrid and multi-cloud solutions, this Nutanix course is the finest there is.

To enroll, contact P2L today!

The Power of Apache Spark and Hadoop

What is Apache Spark?

According to IBM, Apache Spark (Spark) is an open-source data-processing engine for large data sets. It is designed to deliver the computational speed, scalability, and programmability required for Big Data—specifically for streaming data, graph data, machine learning, and artificial intelligence (AI) applications.

Spark’s analytics engine processes data 10 to 100 times faster than alternatives. It scales by distributing processing work across large clusters of computers, with built-in parallelism and fault tolerance. It also provides APIs for programming languages popular with data analysts and data scientists, including Scala, Java, Python, and R.
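
To give a rough sense of what that API looks like in practice, here is a minimal, hedged PySpark sketch (the file path and column names are placeholders, not anything from the course): it reads a CSV file into a distributed DataFrame and runs a simple aggregation that Spark spreads across the cluster.

```python
# Minimal PySpark sketch: a distributed aggregation over a CSV file.
# The input path and column name are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spark-intro-sketch").getOrCreate()

events = spark.read.csv("/data/events.csv", header=True, inferSchema=True)

# Count events per category; Spark distributes the work across the cluster.
counts = events.groupBy("category").agg(F.count("*").alias("events"))
counts.show()

spark.stop()
```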

What is Apache Hadoop?

The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing. The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thereby delivering a highly available service on top of a cluster of computers, each of which may be prone to failures.

What is the Cloudera Developer Training for Spark & Hadoop course about?

This four-day hands-on training course delivers the key concepts and expertise developers need to use Apache Spark to develop high-performance parallel applications. Participants will learn how to use Spark SQL to query structured data and Spark Streaming to perform real-time processing on streaming data from a variety of sources. Developers will also practice writing applications that use core Spark to perform ETL processing and iterative algorithms. The course covers how to work with “big data” stored in a distributed file system and how to execute Spark applications on a Hadoop cluster. After taking this course, participants will be prepared to face real-world challenges and build applications that enable faster, better decisions and interactive analysis, applied to a wide variety of use cases, architectures, and industries.

Course Objectives:

  • How the Apache Hadoop ecosystem fits in with the data processing lifecycle
  • How data is distributed, stored, and processed in a Hadoop cluster
  • How to write, configure, and deploy Apache Spark applications on a Hadoop cluster
  • How to use the Spark shell and Spark applications to explore, process, and analyze distributed data
  • How to query data using Spark SQL, DataFrames, and Datasets
  • How to use Spark Streaming to process a live data stream (a minimal sketch follows this list)
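
As a hedged illustration of that last objective (not a course notebook), the sketch below is a minimal Structured Streaming word count that reads lines from a local TCP socket and keeps a running count per word; the host and port are placeholders, and you could feed it text with a tool such as netcat.

```python
# Hedged sketch: a minimal Structured Streaming word count.
# The socket host/port are placeholders; feed it text with e.g. `nc -lk 9999`.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Read a live stream of lines from a TCP socket.
lines = (spark.readStream
         .format("socket")
         .option("host", "localhost")
         .option("port", 9999)
         .load())

# Split each line into words and keep a running count per word.
words = lines.select(F.explode(F.split(lines.value, " ")).alias("word"))
counts = words.groupBy("word").count()

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())

query.awaitTermination()
```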

Prerequisites:

This course is designed for developers and engineers who have programming experience, but prior knowledge of Spark and Hadoop is not required. Apache Spark examples and hands-on exercises are presented in Scala and Python. The ability to program in one of those languages is required. Basic familiarity with the Linux command line is assumed. Basic knowledge of SQL is helpful.

Get Certified!

Upon completion of the course, attendees are encouraged to continue their studies and register for the CCA Spark and Hadoop Developer exam. Certification is a great differentiator. It helps establish you as a leader in the field, providing employers and customers with tangible evidence of your skills and expertise.

Topics:

  • Introduction
  • Introduction to Apache Hadoop and the Hadoop Ecosystem
  • Apache Hadoop Overview
  • Apache Hadoop File Storage
  • Distributed Processing on an Apache Hadoop Cluster
  • Apache Spark Basics
  • Working with DataFrames and Schemas
  • Analyzing Data with DataFrame Queries
  • RDD Overview
  • Transforming Data with RDDs

Conclusion:

Apache Spark and Apache Hadoop are two of the most promising and prominent distributed systems for processing big data in the machine learning world today.

To get a good understanding and learn the difference between the two systems, opt for this comprehensive course that sheds light on how to work with big data.

To enroll, contact P2L today!

Ace the AI Game with TensorFlow and Apache Spark

What is Google TensorFlow?

According to Guru99, Google TensorFlow is an open-source end-to-end platform for creating Machine Learning applications. It is a symbolic math library that uses dataflow and differentiable programming to perform various tasks focused on the training and inference of deep neural networks. It allows developers to create machine learning applications using various tools, libraries, and community resources.

What is the history of TensorFlow?

According to Guru99, a couple of years ago deep learning started to outperform all other machine learning algorithms when given massive amounts of data. Google saw that it could use these deep neural networks to improve its services:

  • Gmail
  • Photo
  • Google search engine

They built a framework called TensorFlow to let researchers and developers work together on AI models. Once developed and scaled, it allows lots of people to use it.

It was first made public in late 2015, and the first stable version appeared in 2017. It is open source under the Apache 2.0 license: you can use it, modify it, and redistribute the modified version for a fee without paying anything to Google.

How does TensorFlow work?

According to Guru99, TensorFlow enables you to build dataflow graphs and structures that define how data moves through a graph, taking inputs as a multi-dimensional array called a tensor. It allows you to construct a flowchart of operations to be performed on these inputs: data goes in at one end and comes out at the other end as output.

Why is it called TensorFlow?

According to Guru99, it is called TensorFlow because it takes input as a multi-dimensional array, also known as a tensor. You can construct a sort of flowchart of operations (called a graph) that you want to perform on that input. The input goes in at one end, flows through this system of multiple operations, and comes out the other end as output.

Hence the name TensorFlow: a tensor goes in, flows through a list of operations, and then comes out the other side.
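
As a small, hedged illustration of that idea (not taken from Guru99 or the course), the sketch below builds two tensors and chains a few operations; in TensorFlow 2.x these operations execute eagerly, but the same “tensor in, tensor out” flow applies.

```python
# Hedged sketch: tensors flowing through a chain of operations in TensorFlow 2.x.
import tensorflow as tf

# A 2x2 input tensor (a multi-dimensional array) and a 2x2 weight tensor.
x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
w = tf.constant([[0.5, 0.0],
                 [0.0, 0.5]])

# The input flows through matmul -> add -> relu and comes out as a new tensor.
y = tf.nn.relu(tf.matmul(x, w) + 1.0)
print(y.numpy())
```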

What is the Scalable Deep Learning with TensorFlow and Apache Spark course about?

This course starts with the basics of the tf.keras API including defining model architectures, optimizers, and saving/loading models. You then implement more advanced concepts such as callbacks, regularization, TensorBoard, and activation functions.

After training your models, you will integrate the MLflow tracking API to reproduce and version your experiments. You will also apply model interpretability libraries such as LIME and SHAP to understand how the network generates predictions. You will also learn about various Convolutional Neural Networks (CNNs) architectures and use them as a basis for transfer learning to reduce model training time.

Substantial class time is spent on scaling your deep learning applications, from distributed inference with pandas UDFs to distributed hyperparameter search with Hyperopt to distributed model training with Horovod. This course is taught fully in Python.
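
To make the tf.keras portion a little more concrete, here is a hedged sketch of the kind of model definition, training, and saving the course starts with. The layer sizes, synthetic data, callback choice, and save path are placeholder assumptions rather than the course’s own notebooks, and the save format depends on your TensorFlow version.

```python
# Hedged sketch: define, compile, train, and save a small tf.keras model.
# Layer sizes, synthetic data, and the save path are placeholders.
import numpy as np
import tensorflow as tf

# Synthetic regression data: 256 rows, 8 features, 1 target.
X = np.random.rand(256, 8).astype("float32")
y = np.random.rand(256, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")

# EarlyStopping is one example of the callbacks the course covers.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="loss", patience=3)
model.fit(X, y, epochs=20, batch_size=32, callbacks=[early_stop], verbose=0)

# Save and reload the trained model.
model.save("my_model.keras")
reloaded = tf.keras.models.load_model("my_model.keras")
```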

Course Duration:

Two days

Objectives:

Upon completion of the course, students should be able to:

  • Build deep learning models using Keras/TensorFlow
  • Scale the following:
    • Model inference with pandas UDFs & pandas function API
    • Hyperparameter tuning with HyperOpt
    • Training of distributed TensorFlow models with Horovod
  • Track, version, and reproduce experiments using MLflow
  • Apply model interpretability libraries to understand & visualize model predictions
  • Use CNNs (convolutional neural networks) and perform transfer learning & data augmentation to improve model performance
  • Deploy deep learning models

Target Audience:

  • Data scientist
  • Machine learning engineer

Prerequisites:

  • Intermediate experience with Python/pandas
  • Familiarity with machine learning concepts
  • Experience with PySpark

Additional Notes:

  • The appropriate, web-based programming environment will be provided to students
  • This class is taught in Python only

Topics:

  • Intro to Neural Networks with Keras
  • MLflow
  • Convolutional Neural Networks
  • Deep Learning Pipelines
  • Horovod

Conclusion:

Google’s TensorFlow is currently the most famous and sought-after deep learning library because of its high accessibility. Google aims to provide its users with the best AI experience which it achieves with TensorFlow.

If you want to learn more about this deep learning framework then this course is ideal for you.

To enroll, contact P2L today!

VMware Horizon

VMware Horizon 7: Desktop and App Virtualization Reimagined

VMware Horizon

VMware Horizon facilitates a digital workspace by efficiently delivering virtual desktops and applications, equipping workers anywhere, anytime, on any device. With deep integration into the VMware technology ecosystem, the platform offers an agile cloud-ready foundation, modern best-in-class management, and end-to-end security.

Horizon 7 - Benefits

  • With Horizon 7, IT organizations can take advantage of closed-loop management and automation, and tight integration with the software-defined data center, to deliver and protect all the Windows or Linux and online resources users want, at the speed they expect, with the efficiency business demands.
  • VMware Horizon 7 offers greater simplicity, security, speed, and scale in delivering on-premises virtual desktops and applications while offering cloud-like economics and elasticity of scale.
  • Horizon 7 introduces a robust suite of security and policy-focused capabilities that help customers improve their overall security posture, with a multi-layered, defense-in-depth approach that goes from client endpoint to data center to the extended virtual infrastructure.

VMware Horizon 7: Install, Configure, Manage [V7.10] - About the Course

Training in VMware Horizon 7 is easily available and accessible.

P2L has partnered up with VMware to offer a 5-day, hands-on course that gives you the skills to deliver virtual desktops and applications through a single virtual desktop infrastructure platform. This course builds your skills in installing, configuring, and managing VMware Horizon® 7 through a combination of lecture and hands-on labs. You learn how to configure and deploy pools of virtual machines, how to manage the access and security of the machines, and how to provide a customized desktop environment to end-users.

The course focuses on the following skills:

  • Recognize the features and benefits of VMware Horizon
  • Install and configure VMware Horizon® Connection Server™
  • Create and optimize Windows VMs to create VMware Horizon desktops
  • Describe the purpose of Horizon Agent
  • Compare the remote display protocols that are available in VMware Horizon
  • Configure and manage the VMware Horizon® Client™ systems and connect the client to a VMware Horizon desktop
  • Configure, manage, and entitle automated pools of full VMs
  • Configure, manage, and entitle pools of instant-clone desktops and linked-clone desktops
  • Install and configure View Composer
  • Outline the steps and benefits for using TLS CA-signed certificates in VMware Horizon environments
  • Use the role-based delegation to administer a VMware Horizon environment
  • Configure secure access to VMware Horizon desktops
  • Understand and create Remote Desktop Services (RDS) desktops and application pools
  • Install and configure App Volumes to deliver and manage applications
  • Deploy VMware Dynamic Environment Manager™ for user and application management
  • Install and configure a Just-in-Time Management Platform (JMP) server for managing JMP components
  • Describe VMware Dynamic Environment Manager Smart Policies
  • Use the command-line tools available in VMware Horizon to back up and restore the required VMware Horizon databases.
  • Manage the performance and scalability of a VMware Horizon deployment
  • Identify the benefits of the Cloud Pod Architecture feature for large-scale VMware Horizon deployments.

Who Can Benefit from this course?

Technical personnel who work in the IT departments of end-customer companies and people who are responsible for the delivery of remote or virtual desktop services.

Prerequisite Skills

  • VMware infrastructure skills
  • Microsoft Windows system administration experience
  • Use VMware vSphere® Web Client to view the state of virtual machines, datastores, and networks
  • Open a virtual machine console on VMware vCenter Server® and access the guest operating system
  • Create snapshots of virtual machines
  • Configure guest customization specifications
  • Modify virtual machine properties
  • Convert a virtual machine into a template
  • Deploy a virtual machine from a template
  • Configure Active Directory services, including DNS, DHCP, and time synchronization
  • Restrict user activities by implementing Group Policy objects
  • Configure Windows systems to enable Remote Desktop Connections
  • Build an ODBC connection to an SQL Server database

Begin your journey and contact P2L today for more information on this course.

 

A Guide To Scalable Machine Learning with Apache Spark

What is Apache Spark?

Infoworld describes Spark as a data processing framework that can quickly perform processing tasks on very large data sets and can also distribute data processing tasks across multiple computers, either on its own or in tandem with other distributed computing tools. These two qualities are key to the big data and machine learning worlds, which require the marshaling of massive computing power to crunch through large data stores. Spark also takes some of the programming burden of these tasks off the shoulders of developers, with an easy-to-use API that abstracts away much of the grunt work of distributed computing and big data processing.

What is the story of Spark?

As per Towards Data Science, in the 2010s, when RAM prices came down, Spark was born with a big design change: store all intermediate data in RAM instead of on disk.

Spark was good for both:

  1. Data-heavy tasks, as it uses HDFS, and
  2. Compute-heavy tasks, as it uses RAM instead of disk to store intermediate outputs (e.g., iterative solutions)

Because Spark could utilize RAM, it became an efficient solution for iterative tasks in machine learning such as Stochastic Gradient Descent (SGD). That is why Spark MLlib became so popular for machine learning, in contrast to Hadoop’s Mahout.

Furthermore, to do distributed deep learning with TensorFlow you can use:

  1. Multiple GPUs on the same box, or
  2. Multiple GPUs on different boxes (a GPU cluster)

While today’s supercomputers use GPU clusters for compute-intensive tasks, you can install Spark on such a cluster to make it suitable for tasks such as distributed deep learning, which are both compute- and data-intensive.

What is the Scalable Machine Learning with Apache Spark course all about?

In this course, you will experience the full data science workflow, including data exploration, feature engineering, model building, and hyperparameter tuning. You will have built an end-to-end distributed machine learning pipeline ready for production by the end of this course.

This course guides students through the process of building machine learning solutions using Spark. You will build and tune ML models with SparkML using transformers, estimators, and pipelines. This course highlights some of the key differences between SparkML and single-node libraries such as scikit-learn. Furthermore, you will reproduce your experiments and version your models using MLflow.

You will also integrate third-party libraries, such as XGBoost, into Spark workloads. In addition, you will leverage Spark to scale inference of single-node models and parallelize hyperparameter tuning. This course includes hands-on labs and concludes with a collaborative capstone project. All of the notebooks are available in Python, and in Scala as well where available.
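
As a hedged sketch of what a SparkML pipeline of transformers and estimators looks like (the column names and the tiny in-memory dataset are made up; this is not one of the course notebooks):

```python
# Hedged sketch: a SparkML pipeline with a transformer (VectorAssembler)
# and an estimator (LinearRegression); data and column names are made up.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("sparkml-sketch").getOrCreate()

df = spark.createDataFrame(
    [(1.0, 2.0, 5.0), (2.0, 0.5, 4.0), (3.0, 1.5, 7.5)],
    ["feature_a", "feature_b", "label"],
)

assembler = VectorAssembler(inputCols=["feature_a", "feature_b"], outputCol="features")
lr = LinearRegression(featuresCol="features", labelCol="label")

# Fitting the pipeline returns a PipelineModel that can transform new data.
model = Pipeline(stages=[assembler, lr]).fit(df)
model.transform(df).select("label", "prediction").show()
```

In the course, an MLflow tracking run would typically wrap the fit call; that is left out here to keep the sketch short.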

Skills Gained:

  • Create data processing pipelines with Spark
  • Build and tune machine learning models with SparkML
  • Track, version, and deploy models with MLflow
  • Perform distributed hyperparameter tuning with Hyperopt
  • Use Spark to scale the inference of single-node models

Who Can Benefit?

  • Data scientist
  • Machine learning engineer

Prerequisites:

  • Intermediate experience with Python
  • Beginning experience with the PySpark DataFrame API (or completion of the Apache Spark Programming with Databricks class)
  • Working knowledge of machine learning and data science

Conclusion:

If you’re looking to learn a big data platform that is fast, flexible, and developer-friendly, then Apache Spark is the answer! It has an in-memory data engine, which means it can perform tasks up to one hundred times faster than alternatives when processing big data. It is one of the most preferred open-source analytics engines, used by banks, telecommunications companies, games companies, governments, and all the major tech giants such as Apple, Facebook, IBM, and Microsoft.

To enroll, contact P2L today!

ForgeRock Access Management

The ForgeRock Course You Need To Be Successful


How Does ForgeRock Work?


The ForgeRock Identity Gateway creates a virtual perimeter around your apps, acting as a reverse proxy and making sure that requests to them are authenticated and authorized. It enables organizations to be more secure and to more consistently enforce authorization across apps, APIs, and microservices using the latest industry standards.

 

If this is software you want to work with or learn more about, then you’re in the right place. P2L is excited to announce that we’ll be offering one of the top ForgeRock courses just for you!

ForgeRock Access Management Core Concepts

To help students fully understand each of the topics discussed, this structured course combines instructor-led lectures and demonstrations with plenty of laboratory exercises. Having completed this course, students are prepared for designing, installing, configuring, and administering ForgeRock® Access Management (AM) solutions. The course will present the fundamentals of access management, demonstrate the various features of AM, and provide hands-on implementation experience that can be leveraged in a real-world environment.


The Prerequisites


In order to successfully complete this course, you must meet the following requirements:

  • Knowledge of Unix/Linux commands and text editing
  • An understanding of how HTTP and web applications work
  • An understanding of how directory servers work
  • A basic understanding of REST
  • The ability to work in a Java environment is beneficial; no programming experience is necessary

Learning Objectives

Here are some of the key skills you should be able to demonstrate after completing this course:

  • Set up default authentication with AM
  • Set up web agents to control access
  • Allow users to self-register using the self-service feature
  • Configure intelligent authentication using trees
  • Construct a store of identities
  • Retrieve user information using REST (a brief sketch follows this list)
  • Configure access control policies
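
For the REST objective above, here is a heavily hedged Python sketch using the requests library. The endpoint paths, header names, realm, and demo credentials follow the commonly documented AM REST pattern, but they vary by AM version and deployment, so treat every URL, header, and field below as an assumption to verify against your own AM instance.

```python
# Heavily hedged sketch: authenticate to AM over REST and read a user profile.
# Endpoint paths, header names, realm, and credentials are assumptions/placeholders
# that vary by AM version and deployment -- verify them against your own instance.
import requests

AM_BASE = "https://am.example.com/openam"  # placeholder base URL

# 1. Authenticate with the default authentication service to obtain a session token.
auth = requests.post(
    f"{AM_BASE}/json/realms/root/authenticate",
    headers={
        "Content-Type": "application/json",
        "X-OpenAM-Username": "demo",        # placeholder credentials
        "X-OpenAM-Password": "Ch4ng31t",
        "Accept-API-Version": "resource=2.0, protocol=1.0",
    },
)
auth.raise_for_status()
token = auth.json()["tokenId"]

# 2. Use the session token to retrieve the user's profile from the identity store.
profile = requests.get(
    f"{AM_BASE}/json/realms/root/users/demo",
    headers={"iPlanetDirectoryPro": token},  # default session token header/cookie name
)
profile.raise_for_status()
print(profile.json())
```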

Beneficiaries


A successful ForgeRock AM deployment depends on those who supervise its various aspects. This includes, but is not limited to, those with the following responsibilities:

  • System integrators
  • System consultants
  • System architects
  • Software and system developers
  • System administrators


    Wait no more! Take advantage of this amazing course. Contact P2L to enroll today.