a
- Access Control List (ACL)
- An access control list (ACL) is the set of rules used to govern access to digital environments. Organizations use two types of ACLs, filesystem ACLs and networking ACLs, to control traffic flow, grant or deny permissions, and monitor activity in and out of certain systems.
- Amazon Web Services (AWS)
- Amazon Web Services (AWS) is a third-party provider of public cloud computing services. The platform offers over 175 cloud-native services, including Big Data tools, database solutions, Internet of Things (IoT) applications, and more.
- Application Migration
- Application migration involves transferring applications from one computing environment to another. This can include moving apps from on-premises servers to the cloud or migrating them between different cloud platforms.
- Application Modernization
- Application modernization describes the process of updating legacy software with new capabilities and features to create incremental business value. Organizations typically modernize outdated applications through replatforming, refactoring, or rehosting, some of which involve significant changes to core architecture.
- Application Programming Interface (API)
- An application programming interface (API) enables disparate applications to communicate directly with one another according to predefined rules. There are many types of APIs, including Web APIs, Composite APIs, Internal APIs, Open APIs, and Partner APIs. The two most commonly referenced APIs are REST and SOAP APIs, both of which are Web APIs. Organizations use APIs to extend functionality to other systems and gain access to capabilities that fulfill unmet business requirements.
- Application Refactoring
- Application refactoring involves making significant changes to the configuration and source code of an existing application to align with business needs. Through refactoring, organizations can add new features, enhance performance capabilities, reduce costs, and more. Although refactoring does not change an application’s external behavior, it is a more complex process than replatforming or rehosting.
- Artificial Intelligence (AI)
- Artificial intelligence (AI) is a discipline within computer science that focuses on creating smart machines that can execute tasks that humans typically perform. Recently, advances in cloud computing technology have made AI capabilities more accessible. Organizations of all sizes can now build and deploy powerful AI programs that automate manual activities, reduce costs, and create new value.
- Auto-scaling
- Auto-scaling is a cloud computing function by which resources are allocated automatically to applications based on real-time demand. The emergence of cloud computing has enabled more organizations to take advantage of auto-scaling and optimize resource consumption across multiple cloud services. A simplified sketch of the underlying scaling logic appears at the end of this section.
- Availability Zones (AZs)
- Availability Zones (AZs) are isolated, logical data centers available to Amazon Web Services (AWS) customers. AZs come with independent cooling, networking, and power, enabling users to achieve redundancy for critical applications. By hosting applications across multiple AZs, organizations buffer their customers from performance issues and eliminate single points of failure.
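As referenced in the Auto-scaling entry above, the core of auto-scaling is a simple decision rule: compare an observed metric against a target and resize the fleet accordingly. The sketch below is illustrative only; the thresholds, metric, and instance limits are assumptions for the example, not any provider's actual API.

```python
import math

def desired_instance_count(avg_cpu_percent: float, current_instances: int,
                           target_cpu_percent: float = 60.0,
                           min_instances: int = 1, max_instances: int = 10) -> int:
    """Return how many instances are needed to bring average CPU near the target.

    Mirrors the "target tracking" idea used by cloud auto-scalers: scale the fleet
    in proportion to how far the observed metric is from the target.
    """
    if current_instances == 0:
        return min_instances
    # Proportional rule: needed capacity scales with (observed / target).
    needed = math.ceil(current_instances * (avg_cpu_percent / target_cpu_percent))
    # Clamp to the configured fleet limits.
    return max(min_instances, min(max_instances, needed))

# Example: 8 instances at 90% CPU against a 60% target -> 12 needed, capped at 10.
print(desired_instance_count(avg_cpu_percent=90, current_instances=8))  # 10
```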
b
- Big Data
- Big Data describes the massive amount of information created in the world today with ever-increasing velocity. Organizations collect, store, and process Big Data through advanced data management techniques, many of which are available through the cloud. With Big Data analytics, organizations can extract valuable insights from structured, semi-structured, and unstructured datasets.
- Blockchain
- A blockchain is an open, immutable, and distributed digital record of information that promotes accountability and transparency amongst all parties. Although originally designed to support digital currencies, organizations use blockchain technology today for numerous applications.
- Business Intelligence
- Business intelligence (BI) refers to when corporations use data and analytics to discover new insights, improve decision-making, and create enterprise value. Modern BI practices rely on big data analytics, modern data infrastructure technologies, advanced visualizations, and nuanced reporting to gather and process information quickly at scale, with the hopes of identifying new growth opportunities.
c
- Cloud Application
- A cloud application is a web-based program that relies on the power of cloud computing and related capabilities for data storage, logic processing, and more. Processing for cloud applications is typically shared between local devices and remote cloud servers. Users interact with cloud applications through Internet browsers.
- Cloud Automation
- Cloud automation describes the practice of automating cloud infrastructure management processes in line with IT resource demand. Cloud automation is commonly used by DevOps, security, and application development teams to free up engineering capacity for more complex aspects of cloud-native operations.
- Cloud Computing
- Cloud computing describes when computing services, such as data storage, networking, analytics, server hosting, etc., are delivered over the Internet. Cloud computing offers many advantages over on-premises computing, including lower operating costs, flexible resource allocation, and improved scalability.
- Cloud Infrastructure
- Cloud infrastructure is composed of hardware and software components that deliver cloud computing services over the Internet. These components include servers, storage, networking equipment, and virtualization technology.
- Cloud Migration
- Cloud migration is the process of moving on-premises IT infrastructure, including databases, applications, and other components, to the cloud. Migrations enable organizations to fulfill ever-evolving business requirements and take advantage of cloud computing capabilities. Cloud migrations can be highly complex endeavors that require significant planning and expertise to execute successfully.
- Cloud-native
- Software services, business applications, and IT systems that are cloud-native are explicitly designed to run in dynamic cloud environments. Whereas on-premises applications may need to be modernized for the cloud, cloud-native applications work immediately in cloud environments. They are also generally more agile and scalable than legacy technologies.
- Cloud Provisioning
- Cloud provisioning is the process by which cloud providers, such as AWS and Microsoft Azure, allocate and deliver cloud resources and services to customers on an as-needed basis. Cloud provisioning is central to the on-demand nature of the cloud computing model and represents a key advantage over traditional, more limited approaches to compute resource management.
- Cloud Service Provider
- Cloud service providers offer cloud computing services, networking, and infrastructure over the web. Organizations use third-party cloud service providers to outsource much of the effort associated with maintaining on-premises IT. Today’s leading cloud service providers offer cost-efficient and scalable data storage, analytical tools, and more, all through the Internet.
- Cloud Storage
- Cloud storage refers to the data storage model in which an organization’s data is maintained by a cloud provider that is responsible for storing, maintaining, and serving information from a remote repository. Cloud storage frees IT teams from having to set up or manage data infrastructure on-premises. It also paves the way for enterprises to use modern data architectures (e.g., data lakes) and advanced analytics.
- Cluster
- A cluster describes a group of computers or hosts that collectively work together to support a specific application or middleware software. In a cluster, individual computing devices are called “nodes,” and all nodes work on the same tasks. Clusters are commonly seen in high-performance computing (HPC) applications that require significant computing power.
- Compute
- In modern computing, compute refers to the processing resources, such as CPU cycles and memory, that a system uses to run applications and process data. Organizations must be aware of their existing computing capacity and the computing power they need to support critical business activities.
- Containers
- Containers are software units that enable organizations to run their applications quickly and reliably in different computing environments. Containers group all runtime elements together, including code, system libraries, and settings, into lightweight and secure packages. Organizations use containers to decouple applications from their native environments so that they can be deployed easily and consistently anywhere.
- Content Delivery Network (CDN)
- A Content Delivery Network (CDN) is a group of geographically distributed servers that collaborate to deliver content over the web. CDNs enable organizations to rapidly transfer assets, such as HTML pages, stylesheets, images, and videos to end users. Today, CDNs deliver a vast majority of the world’s web content.
- Continuous Integration/Continuous Delivery (CI/CD)
- Continuous Integration and Continuous Delivery (CI/CD) refers to a set of practices used by DevOps teams to automate activities related to application building, testing, and deployment. Through CI/CD, DevOps teams can constantly innovate, deliver new features to market, and deploy updates in an iterative fashion. CI/CD is considered a best practice in modern cloud computing.
d
- Database
- A database is a collection of data that is stored digitally in a computing system. Traditionally, databases were used to store structured information, although modern cloud-based databases enable organizations to store semi-structured and unstructured data. Organizations typically use database management systems to retrieve, manipulate, and manage their data.
- Database Instance
- A database instance refers to a complete database environment and its components, which may include database management software, specific procedures, predefined table structures, and other features. It’s not uncommon for administrators to create multiple database instances, each with a unique purpose for the organization.
- Data Architecture
- Data architecture refers to the resources and tools used to ingest, store, and move data across cloud environments. Data architecture includes things like real-time data ingestion pipelines that pull information from IoT devices in the field or data lakes that store large volumes of structured and unstructured data.
- Data Engineering
- Data engineering refers to the work of preparing data for analysis and complex data science after it has been ingested and consolidated in the cloud. Data engineering often involves heavy data processing, ETL, analytics, and visualization.
- Data Governance
- Organizations that rely on data for business growth need a strategy to manage that data. Data governance refers to the processes, policies, and standards that an organization uses to keep its data secure and private.
- Data Lake
- A data lake is a centralized, digital repository capable of storing both structured and unstructured data. Data lakes are highly scalable, making them valuable for Big Data analytics and applications. They can also ingest information from on-premises sources or real-time streams.
- Data Migration
- Data migration describes the process of moving data permanently from one type of computer storage to another. Though seemingly straightforward, data migrations often require organizations to transform their data in preparation for the new storage system. The purpose of a data migration is typically to increase data management efficiency, performance, flexibility, or security. A common practice today is for organizations to migrate their data from on-premises to a cloud provider, like AWS.
- Data Pipeline
- Data pipelines are used to streamline the processes involved in moving information from one location to another. For example, data pipelines can automatically extract, transform, load, combine, and validate data for further processing. At a time when organizations are collecting more information than ever, data pipelines help eliminate errors and bottlenecks.
- Data Science
- Data science involves analyzing data to uncover valuable insights that drive business decisions. It takes a multidisciplinary approach, integrating concepts and techniques from mathematics, statistics, artificial intelligence, and computer engineering to process and interpret large datasets.
- Data Stream
- A data stream is a sequence of digital signals that carries information to or from a data provider. Data streams typically contain raw data that can be processed, analyzed, stored, and applied to support modern applications, advanced analytics, and other use cases common to technology companies, researchers, and enterprises that collect high volumes of data.
- Data Warehouse
- A data warehouse is a centralized repository designed to store structured and semi-structured data for reporting and analytical purposes. It aggregates information from various sources, such as point-of-sale systems, applications, and relational databases.
- Deep Learning
- Deep learning is a type of machine learning that teaches computers to process data in a way that is inspired by the human brain. It’s used to analyze large, complex datasets, complete nonlinear tasks, and respond to inputs faster and more accurately than humans.
- DevOps
- DevOps encompasses the practices, tools, and philosophies regarding how to deliver software applications and services quickly to customers. The DevOps model enables organizations to innovate rapidly and launch tailored offerings by empowering development and operations teams to work together across the lifecycle of applications.
- Docker
- Docker is a widely used technology for building and deploying containers. Docker containers simplify the complexities associated with packaging, shipping, and running applications in any computing environment.
- Docker Image
- Docker images are software packages that include all components needed to run an application. Images contain critical information about how various software components will be executed, as well as how containers will be instantiated.
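A minimal sketch of the image-to-container workflow described above, assuming the Docker SDK for Python (the `docker` package) is installed and a local Docker daemon is running; the image name and command are just examples.

```python
import docker  # Docker SDK for Python (pip install docker)

# Connect to the local Docker daemon using environment defaults.
client = docker.from_env()

# Pull an image (the package) and run it as a container (the running instance).
client.images.pull("alpine:latest")
output = client.containers.run("alpine:latest", ["echo", "hello from a container"], remove=True)

print(output.decode().strip())  # hello from a container
```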
e
- Edge Computing
- Edge computing is the process of bringing information storage and computing abilities closer to the devices that produce that information and the users who consume it. This improves response time on remote devices and allows organizations to get timely insights from device data.
- Elastic Computing
- Elastic computing refers to a system’s ability to scale processing, memory, and storage capacity with changes in demand. Organizations that implement elastic computing don’t have to worry about capacity planning or peak usage scenarios. Instead, they can trust their IT infrastructure to acquire computing resources dynamically.
- Endpoint
- An endpoint is a remote computing device or node that communicates and receives information across a network. Endpoints can be data terminals, host computers, modems, bridges, and other commonly used computing infrastructure. Endpoints are particularly valuable in IoT and smart applications that depend on “edge” devices to gather information from the surrounding environment, which can then be used to support new applications, offerings, or business models.
- Extract, Transform, and Load (ETL)
- Extract, transform, and load (ETL) describes the process of ingesting and integrating data from diverse sources into a single, consolidated data store. ETL is particularly important today for organizations that gather information from remote endpoints and edge devices that may not share the same data management protocols. For organizations that aim to leverage big data analytics and AI/ML, ETL is a crucial step in the early stages of the data pipeline.
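A minimal ETL sketch using only the Python standard library; the CSV content, field names, and SQLite table are invented for the example, standing in for a real source feed and consolidated store.

```python
import csv, io, sqlite3

# Extract: read raw records from a source (an in-memory CSV standing in for a real feed).
raw = io.StringIO("device_id,temp_f\nsensor-1,68.0\nsensor-2,71.6\n")
rows = list(csv.DictReader(raw))

# Transform: normalize units and types before loading.
records = [(r["device_id"], round((float(r["temp_f"]) - 32) * 5 / 9, 1)) for r in rows]

# Load: write the cleaned records into a consolidated store (in-memory SQLite here).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (device_id TEXT, temp_c REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)", records)

print(conn.execute("SELECT * FROM readings").fetchall())
# [('sensor-1', 20.0), ('sensor-2', 22.0)]
```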
g
- Generative AI (Gen AI)
- Generative AI (or Gen AI) refers to artificial intelligence technologies that have the capacity to create new content, ideas, or solutions, typically based on patterns, rules, or learned examples from input data. Utilizing machine learning models, this technology learns from existing data to generate new instances that are similar but not identical to the original data.
- Google Cloud
- Google Cloud is a third-party provider of public cloud computing services. Launched in 2008 by Google, Google Cloud Platform offers a variety of cloud solutions for data management, infrastructure modernization, smart analytics, and more.
h
- Hadoop
- Apache Hadoop is an open source framework that enables organizations to store massive volumes of data in an efficient manner. The framework also facilitates clustering so that engineering teams can quickly analyze large datasets in parallel. Hadoop includes four modules: the Hadoop Distributed File System (HDFS), Yet Another Resource Negotiator (YARN), MapReduce, and Hadoop Common. A toy, single-machine illustration of the MapReduce idea appears at the end of this section.
- High Availability
- In the computing world, high availability refers to the quality of an application or infrastructure to continue performing despite disruptions. Highly available systems use redundant hardware and software to minimize service interruptions and mitigate single points of failure. When failures do occur, highly available infrastructure relies on failover processes and backups to maintain operations.
- Hosted Application
- A hosted application is software that runs on third-party infrastructure rather than on-premises. Hosted applications can be accessed from anywhere in the world through the Internet. In the age of cloud computing, more organizations are using hosted applications to minimize the complexities and costs of maintaining on-premises infrastructure.
- Hybrid Cloud
- Hybrid cloud refers to a computing environment that uses a combination of private and public cloud services or on-premises infrastructure. Organizations use the hybrid cloud approach to optimize IT architecture around digital transformation goals. For example, a company might use a public cloud provider for its on-demand cloud resources, a private cloud for security purposes, and on-premises infrastructure for compliance reasons.
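As referenced in the Hadoop entry above, here is a toy word count written in plain Python rather than on an actual Hadoop cluster; it only mimics the map, shuffle, and reduce phases that Hadoop distributes across nodes.

```python
from collections import defaultdict

documents = ["the cloud scales", "the cloud stores data"]

# Map: emit (key, value) pairs from each input record.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: group all values by key, as Hadoop does between the map and reduce phases.
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce: combine each key's values into a final result.
word_counts = {word: sum(counts) for word, counts in grouped.items()}
print(word_counts)  # {'the': 2, 'cloud': 2, 'scales': 1, 'stores': 1, 'data': 1}
```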
i
- Infrastructure-as-a-Service (IaaS)
- Infrastructure-as-a-Service (IaaS) is one of the primary types of cloud services that provide users with instant computing, storage, and other IT infrastructure delivered through the Internet. IaaS solutions typically scale with demand, allowing organizations to pay only for what they use. Doing so minimizes the complexity of having to purchase and manage on-premises infrastructure.
- Internet of Things (IoT)
- Internet of Things (IoT) refers to the collective mass of physical devices that can connect to the Internet and communicate with one another. Through the IoT, organizations can automate the information-gathering process and use the intelligence to improve their products, create new sources of value, and deploy tailored pricing.
j
- JavaScript Object Notation (JSON)
- JavaScript Object Notation (JSON) is a data interchange format that makes it easy for companies to store and transport data across the web in a way that both humans and machines can understand. JSON represents data in two ways: through key-value pairs and as arrays representing ordered collections of values. Because of JSON’s popularity, the format is a common output for APIs and data that gets sent from a server to a web page.
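A minimal example of both JSON structures (key-value pairs and ordered arrays) using Python's built-in json module; the field names and values are made up for illustration.

```python
import json

# A JSON object uses key-value pairs; "regions" holds an ordered array of values.
payload = {"service": "object-storage", "active": True, "regions": ["us-east-1", "eu-west-1"]}

text = json.dumps(payload)     # serialize to a JSON string for storage or transport
restored = json.loads(text)    # parse it back into native data structures

print(text)                    # {"service": "object-storage", "active": true, ...}
print(restored["regions"][0])  # us-east-1
```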
l
- Load Balancing
- Load balancing is the process of spreading network traffic over multiple servers to ensure that no one server is entirely responsible for supporting an application. Through load balancing, organizations can distribute processing resources as needed to improve the performance and responsiveness of modern applications. Load balancing techniques include round robin, least connections, resource-based, weighted response time, and more.
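A minimal sketch of the round robin technique mentioned above: requests are handed to each server in turn. The server names are placeholders, and a real load balancer would also track health and capacity.

```python
from itertools import cycle

# Placeholder backend servers; a real deployment would discover these dynamically.
servers = ["app-server-1", "app-server-2", "app-server-3"]
next_server = cycle(servers)  # endlessly repeats the list in order

def route(request_id: int) -> str:
    """Assign the request to the next server in rotation."""
    return next(next_server)

for i in range(5):
    print(i, "->", route(i))
# 0 -> app-server-1, 1 -> app-server-2, 2 -> app-server-3, 3 -> app-server-1, 4 -> app-server-2
```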
m
- Machine Learning
- Machine learning is a branch of artificial intelligence concerned with building smart computer algorithms that improve over time. Organizations use machine learning to identify patterns in massive datasets and use those insights to enhance performance. Machine learning is responsible for many software services today, including recommendation engines, social media feeds, and voice assistants. A minimal fit-and-predict sketch appears at the end of this section.
- Managed Service Provider (MSP)
- A managed service provider (MSP) is a third-party company that provides ongoing services to help organizations maintain their IT infrastructure. MSPs generally offer network, security, and application support services through an existing data center or another third-party IaaS provider.
- Management and Governance
- In the cloud computing world, Management and Governance refers to implementing adequate protections and oversight for IT infrastructure. Through management and governance, organizations monitor the integrity of their applications, perform audits, analyze resource consumption, manage costs, and more.
- Microservices
- Microservices describes a method of software development that aims to compartmentalize application functions so that they can deploy, run, and scale independently. Unlike monolithic applications, microservices are loosely coupled and flexible when it comes to implementing updates or fixing errors.
- Microsoft Azure
- Microsoft Azure is a public cloud computing platform released in 2010. Commonly referred to as Azure, the service enables organizations to build, test, launch, and manage modern applications hosted in Microsoft-managed data centers.
- Middleware
- Middleware refers to software that sits between various applications and the operating system in use. The purpose of middleware is to allow for seamless communication, functionality, and data management across diverse systems without detracting from the user experience.
- Multicloud
- Multicloud describes the circumstance in which an organization uses more than one cloud vendor for the same type of cloud deployment. For example, a company might use one public cloud service for its on-demand computing needs but a different public cloud provider for a unique application that fulfills a specific business need. Many organizations implement multicloud deployments to gain redundancies and avoid vendor lock-in.
- Multi-tenant
- Multi-tenant architecture, or multitenancy, refers to a type of software architecture commonly used in cloud computing in which a single instance of software running on shared infrastructure serves multiple customers, or tenants. Each tenant's data remains logically isolated, allowing organizations to securely and dynamically serve many customers from the same deployment.
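As referenced in the Machine Learning entry above, here is a minimal sketch assuming the scikit-learn library is installed; the tiny dataset is invented purely to show the fit-then-predict pattern.

```python
from sklearn.linear_model import LinearRegression

# Invented training data: compute hours used vs. monthly bill (illustration only).
X = [[10], [20], [30], [40]]   # feature: compute hours
y = [15.0, 25.0, 35.0, 45.0]   # target: cost in dollars

model = LinearRegression()
model.fit(X, y)                # the algorithm "learns" by fitting parameters to the data

print(round(model.predict([[50]])[0], 1))  # approximately 55.0
```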
o
- On-premises
- On-premises refers to IT infrastructure that organizations manage onsite. This includes hardware and software that could otherwise sit on a remote server farm or be provided by a cloud services vendor. Typically, on-premises IT requires more effort to maintain from an operational standpoint.
- OpenStack
- OpenStack is a free, open source cloud computing platform originally developed by Rackspace and NASA that allows organizations to manage public and private cloud environments via Infrastructure-as-a-Service (IaaS) deployments. Initially released in 2010, OpenStack is now one of the top three open source projects in the world based on current activity.
- Orchestration
- In computing, orchestration describes the process of scheduling and integrating automated tasks across disparate systems. Organizations can orchestrate workflows between on-premises and cloud infrastructure, as well as streamline the execution of complex, interconnected workloads.
p
- Platform-as-a-Service
- Platform-as-a-Service (PaaS) refers to a cloud-based service model in which providers like AWS offer all hardware and software needed for organizations to develop and deploy modern applications. PaaS frees companies from having to manage their own servers and infrastructure, creating additional capacity to focus on delivering modern applications.
- PostgreSQL
- PostgreSQL is an open source relational database that supports both SQL (relational) and JSON (non-relational) querying. It is used as the primary data store for many web, mobile, geospatial, and analytics applications.
- Private Cloud
- A private cloud describes a cloud environment and resources that are used exclusively by a single organization. Private clouds may be deployed from an on-site data center or hosted by a third-party managed services provider. The advantage of using a private cloud is that organizations can customize management, governance, and other operating elements to their unique needs.
- Public Cloud
- A public cloud describes a cloud environment that is owned and operated by a third-party provider. Public cloud resources are delivered over the Internet to “tenants” that all share hardware, storage, and network devices. The advantage of using a public cloud provider is that organizations don’t have to purchase or maintain critical IT infrastructure.
r
- Representational State Transfer (REST)
- Representational State Transfer (REST) is an architectural model that lays out standards for how stateless computing systems should communicate with one another across the web. In RESTful systems, client and server implementations operate independently, allowing them to evolve and scale more seamlessly, which is a crucial feature of the World Wide Web. REST APIs, or RESTful APIs, are rules-based interfaces that enable applications and devices to communicate with one another according to RESTful design principles. REST APIs are particularly useful for facilitating connections within microservices architectures.
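A minimal sketch of calling a REST API, assuming the requests library is installed; the URL and response fields are hypothetical placeholders, not a real service.

```python
import requests

# Hypothetical endpoint; a REST API exposes resources as URLs like this.
url = "https://api.example.com/v1/users/42"

response = requests.get(url, headers={"Accept": "application/json"}, timeout=5)
response.raise_for_status()   # raise an error for 4xx/5xx status codes

user = response.json()        # REST APIs commonly return JSON representations of resources
print(user.get("name"))       # field name assumed for illustration
```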
s
- Scalability
- In computing, scalability refers to the ability of an application, network, organization, or process to adjust up and down quickly with demand. For example, scalable applications can support rapid increases in utilization, thus providing high-quality experiences for users regardless of network traffic or resource demand.
- Schema
- In computing, schemas typically describe how data is structured in databases or XML files. Database schemas refer to the tables and fields that help organize data. These schemas are often represented as visual diagrams. XML schemas highlight what data is included in XML files and provide a structure for that information. A short example of a relational schema appears at the end of this section.
- Security, Identity, and Compliance
- In cloud computing, security, identity, and compliance are concerned with securing workloads and applications adequately in the cloud. Typical priorities across these areas include protecting data, managing permissions, safeguarding infrastructure, monitoring cyber threats, and maintaining data privacy compliance.
- Serverless Computing
- Serverless computing is a cloud computing approach in which a third-party provider dynamically allocates and manages the machine resources needed to run code. Organizations pay only for the computing resources they use without having to manage, provision, or maintain any servers themselves.
- Shared Security Model
- A Shared Security Model is a framework that helps cloud service providers (CSPs) and customers determine how to divide up security responsibilities. There are different types of Shared Security Models depending on how much operational support the CSP provides. Shared Security Models cover everything from hardware and infrastructure to data, the network, and all endpoints.
- Software-as-a-Service (SaaS)
- Software-as-a-Service (SaaS) is a software delivery model by which vendors license access to their data and applications through the Internet. Generally, SaaS vendors host and maintain their own code, databases, and services. Customers then pay for on-demand access, enabling them to fulfill certain business requirements without building something in-house or committing to long-term contracts.
- Storage
- In the cloud computing world, storage refers to digital space that organizations lease from third-party cloud vendors. With cloud storage, organizations don’t have to purchase or maintain storage infrastructure themselves. Instead, they can rely on vendors to manage capacity, security, and more, paying only for what they use.
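As referenced in the Schema entry earlier in this section, a database schema defines the tables, fields, and relationships that organize data. A minimal example using Python's built-in sqlite3 module; the table and column names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The schema: table names, column names, types, and the relationships between them.
conn.executescript("""
    CREATE TABLE customers (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL
    );
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        total_usd   REAL
    );
""")

# The schema constrains what each table can hold and how tables relate.
conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Acme Corp')")
conn.execute("INSERT INTO orders (customer_id, total_usd) VALUES (1, 199.99)")
print(conn.execute(
    "SELECT name, total_usd FROM customers JOIN orders ON customers.id = orders.customer_id"
).fetchall())
# [('Acme Corp', 199.99)]
```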
v
- Vendor Lock-in
- Vendor lock-in refers to when customers of a service are locked into a relationship with a legacy vendor regardless of the quality of the service they are receiving. Customers may be unable to break out of these contracts for various reasons, including explicit contractual stipulations or financial repercussions associated with switching vendors.
- Virtualization
- Virtualization refers to technology that organizations use to deploy virtual instances of computing resources, such as servers, storage, or networks, abstracted from physical hardware. Through virtualization, organizations can use their IT infrastructure more efficiently by distributing capacity that would otherwise go unused across different tenants or environments.
- Virtual Machine
- A virtual machine is a digital computing environment that behaves like a physical computer. Virtual machines use software, rather than hardware, to run apps and programs, enabling developers to test applications in isolated environments.
- Virtual Private Cloud
- A virtual private cloud is an isolated environment with access to on-demand computing resources within a broader public cloud environment. Organizations use virtual private clouds to gain privacy and control over their data, applications, and code without sacrificing scalability and other advantages of using public cloud platforms.