In this article we collect the most commonly used terms in the field of cloud computing, from basic concepts such as the main types and models of cloud services to more specialized terms. But let's start from the beginning…
What is cloud computing?
Cloud computing is a model for delivering computing services and resources – from applications to storage, networking and processing power – over the Internet, on demand and under a pay-as-you-go model. In cloud computing, cloud service providers are responsible for managing applications and computing resources, according to the cloud service model they offer, while users pay only for the services they use and scale when needed. Although the term "cloud computing" began to gain prominence in the early 21st century, the underlying idea of "computing as a service" dates back to earlier concepts such as time-sharing and utility computing, and virtualization is the technology that made it practical at scale.
Cloud Computing Glossary
Network storage is a technology that allows storage capacity to be shared between servers or computers remotely over a network, using a direct connection or protocols such as NFS, iSCSI, etc.
Bandwidth is the maximum data transfer capacity of a network or Internet connection; that is, the amount of data that can be sent over a connection in a given time. Bandwidth is usually measured in megabits per second (Mbps).
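As a quick illustration (the function name and figures are ours), the relationship between bandwidth and transfer time can be sketched as follows, keeping in mind that file sizes are usually quoted in bytes while bandwidth is quoted in bits per second:

```python
# Rough transfer-time estimate: time = data size / bandwidth.
# Note the unit conversion: 1 byte = 8 bits.

def transfer_time_seconds(size_mb: float, bandwidth_mbps: float) -> float:
    """Ideal transfer time, ignoring protocol overhead and latency."""
    size_megabits = size_mb * 8
    return size_megabits / bandwidth_mbps

# A 500 MB backup over a 100 Mbps link:
print(transfer_time_seconds(500, 100))  # → 40.0 seconds
```

Real transfers take longer than this ideal figure because of protocol overhead, latency and contention.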
An API, short for Application Programming Interface, is a set of definitions and protocols used to develop and integrate application software. It allows software applications to communicate with each other using an agreed set of rules.
A backup is a copy of your data that is made and stored in a different location so that in the event of data loss, you can recover and restore it. It’s a simple form of disaster recovery.
Load balancing is the process of distributing computing workloads across multiple servers. A load balancer distributes an application’s traffic across multiple servers to prevent a single application server from becoming a point of failure.
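A minimal sketch of the idea, using round-robin scheduling (one common load-balancing strategy; the server names are illustrative):

```python
from itertools import cycle

# Round-robin load balancing: requests are assigned to backend servers
# in turn, so no single server receives all the traffic.
servers = ["app-server-1", "app-server-2", "app-server-3"]
next_server = cycle(servers)

def route_request(request_id: int) -> str:
    """Return the backend that should handle this request."""
    return next(next_server)

assignments = [route_request(i) for i in range(6)]
print(assignments)
# → ['app-server-1', 'app-server-2', 'app-server-3',
#    'app-server-1', 'app-server-2', 'app-server-3']
```

Production load balancers add health checks, weighting and session affinity on top of a simple rotation like this.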
A bare metal server is a single-tenant physical server; in other words, a physical server for the exclusive use of the contracting customer, not shared with other organizations or users. Bare metal servers can outperform both traditional on-premises solutions and public cloud offerings.
CAPEX, short for Capital Expenditure, refers to the amount of money a company spends to acquire and improve assets. They can be tangible, such as hardware and facilities, or intangible, such as software licenses and intellectual property. Find out more about what CAPEX is.
A CDN, short for Content Delivery Network, is a network of geographically distributed servers and data centers to improve availability and performance by placing cached copies of content closer to end users. More details on what a CDN is.
A data center or data processing center (DPC) is the physical location where the necessary computing resources of an organization or Internet service provider are concentrated. More details on what a data center is and what it looks like.
Hybrid Cloud is a cloud computing deployment model that combines the dedicated computing resources of a private cloud for mission-critical data and applications with the shared resources of a public cloud to handle specific peaks in demand.
The term cloud native describes applications designed and built to take full advantage of the cloud model: scalability, elasticity, resilience and flexibility.
A private cloud is a cloud computing deployment model in which the computing resources and environment are for the exclusive use of a single organization. A private cloud is comparable to having your own data center, but with the advantages of delegated management and on-demand scaling thanks to virtualization.
A public cloud is a cloud computing deployment model in which an ISP offers computing resources over the Internet on an infrastructure shared between several organizations, on a pay-as-you-go basis.
A cluster is a collection of servers connected to each other via a network, which behaves in many ways like a single server.
A container is a lightweight virtualization instance that packages an application with its dependencies and runs it in isolation while sharing the host's operating system kernel with other containers. More details on containerization systems and container orchestrators.
A CPU, short for Central Processing Unit, is an electronic circuit within a computer that executes the instructions of each computer program through basic arithmetic, logic, control, and input/output operations. A CPU can be made up of multiple "cores", i.e. several processing units, so a modern processor is capable of performing several tasks in parallel. Additionally, each core may be able to process multiple threads at the same time (a technique Intel markets as "Hyper-Threading").
DataOps, short for Data Operations, is a methodology used by data engineers, data scientists and analysts to accelerate the development of Big Data applications, working together with DevOps teams. It improves quality and reduces data analytics cycle times.
DevOps, short for Development and Operations, is a methodology based on the collaboration of IT operations and software development teams with the goal of making developments and implementations as agile as possible.
DevSecOps, short for Development, Security and Operations, is an approach that integrates the security layer into the DevOps methodology from the beginning.
Edge computing is a distributed computing paradigm that involves bringing the computation and storage of data closer to the place where it is generated. By bringing compute resources closer to where the data originates, you can achieve numerous benefits, such as bandwidth savings and improved response times. More details on edge computing.
Network equipment or network hardware is an electronic device that enables the transmission and interaction of data between devices within a computer or server network.
Georedundancy involves replicating data and IT infrastructure across multiple remote data centers. Its goal is to protect data and minimize downtime to ensure normal business operations in the event of an unexpected service interruption. Therefore, it is especially relevant when it comes to mission-critical servers, data and applications. More details on georedundancy.
A GiB or gibibyte is a unit of measurement used for data storage. One GiB is equal to 2³⁰ bytes (1,073,741,824 bytes).
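The distinction between binary and decimal units is easy to check; a gibibyte (GiB) uses powers of two, while a gigabyte (GB) uses powers of ten:

```python
# Binary vs. decimal storage units.
GIB = 2**30   # 1,073,741,824 bytes
GB = 10**9    # 1,000,000,000 bytes

print(GIB)       # → 1073741824
print(GIB / GB)  # → 1.073741824 (a GiB is about 7.4% larger than a GB)
```

This difference is why an advertised "1 TB" drive appears as roughly 931 GiB to the operating system.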
HA, short for High Availability, refers to a system's ability to operate continuously, guaranteeing an agreed level of operational performance – usually uptime – for a longer than normal period.
The term hardware refers to the physical parts of a computer: CPU, monitor, memory, graphics card, etc.
Hyperconvergence is an IT infrastructure that aims to reduce data center complexity. Hyperconverged infrastructure (HCI) is a software-defined system that unifies all elements of a traditional data center: compute, storage, networking, and management. More details on hyperconvergence and HCI.
A hypervisor is a layer of virtualization software that allows you to create and run multiple virtual machines, each with its own operating system, on a single server. It is responsible for separating the virtual machines' resources from the hardware system and distributing them appropriately. It is also known as a virtual machine monitor (VMM). VMware ESXi and KVM are some examples of hypervisors. More details on hypervisors.
Housing, also known as colocation, is a data center service where companies can rent space, electricity, bandwidth, air conditioning and physical security to house their IT equipment. Housing data centers also facilitate access to a wide range of network and telecommunications service providers. This model allows companies to maintain full control over their equipment, while the housing provider takes care of managing the data center.
IaaS, short for Infrastructure as a Service, is a cloud computing service model that consists of providing and managing computing resources, such as servers, storage or networks, over the Internet.
IOPS, short for input/output operations per second, is a unit of measurement used to indicate the performance of storage devices or network storage volumes. It is an important metric because the higher the IOPS figure, the higher the read and write performance.
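IOPS relates to raw throughput via the block size of each operation. A rough sketch (the function name and figures are illustrative):

```python
# Approximate throughput from an IOPS figure:
# throughput = IOPS × block size.

def throughput_mib_per_s(iops: int, block_size_kib: int) -> float:
    """Theoretical throughput in MiB/s for a given IOPS and block size."""
    return iops * block_size_kib / 1024  # 1 MiB = 1024 KiB

# A volume rated at 10,000 IOPS with 4 KiB blocks:
print(throughput_mib_per_s(10_000, 4))  # → 39.0625 MiB/s
```

This is why IOPS figures should always be read together with the block size used in the benchmark: the same device can show very different IOPS numbers at 4 KiB versus 64 KiB.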
An ISP, short for Internet Service Provider, is a provider of Internet services such as cloud solutions, network storage or networking.
Latency or network latency is the time it takes for a data packet to transfer between a server and a user over a network.
A legacy system is an old or obsolete system, technology, or software application that remains in use in an organization because it cannot be easily replaced or updated, or because the organization simply does not want to. More details on what legacy systems are.
Lift and shift
Lift-and-shift is an approach to cloud migration that involves moving an existing application, and its data, to a cloud infrastructure without redesigning or making major changes.
A virtual machine or VM, short for Virtual Machine, is a virtual environment created on a physical hardware system using a hypervisor and equipped with its own operating system, CPU, memory, network interface and storage. The physical machine running the VM is known as the “host”, and the VM emulated on the physical machine is known as the “guest”.
Middleware is a type of software used to connect software components and business applications.
Cloud migration is the process of moving, partially or completely, a company's data, applications and services from an on-premises solution to a cloud environment. Discover some key elements to prepare for a secure migration to the cloud.
A hot migration, also known as live migration, is the process of moving a system or virtual machine from one server or data center to another without shutting it down or interrupting its availability.
Shared responsibility model
The shared responsibility model establishes that the cloud provider and the customer share responsibility for the security and protection of data in the cloud. Under this model, the Internet Service Provider is responsible for the security of the cloud (the underlying infrastructure), while the customer is responsible for security in the cloud (their data, access and configurations).
Monitoring involves using tools that extract detailed information from virtual and physical servers, storage, network, systems, databases, platforms, etc., to predict problems and proactively resolve them. More details on the importance of monitoring IT systems.
Multicloud is a cloud computing deployment model in which cloud services from multiple public and private cloud providers are combined to leverage the specific benefits of multiple providers.
A Network Operations Center or NOC is a centralized location where an organization's network team monitors, manages, and optimizes networks and detects and resolves any incidents that may occur. It is also known as a "network control center". More details on network operations centers.
OPEX, short for operating expenses, refers to the amount of money a company spends on services needed to keep the business running: supply costs, salaries, server maintenance, etc. Find out more about what OPEX is.
PaaS, short for Platform as a Service, is a cloud computing service model that provides a ready-to-use development environment over the Internet in which developers can develop, manage, deploy and test their software applications.
The PUE (Power Usage Effectiveness) index is used to measure the energy efficiency of data centers. PUE is the value that results from dividing the total amount of energy used by a data center installation by the energy supplied to the data center’s IT equipment.
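The calculation itself is a simple ratio. A sketch with illustrative numbers (a PUE of 1.0 would mean every watt goes to IT equipment; lower is better):

```python
# PUE = total facility energy / IT equipment energy.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness of a data center."""
    return total_facility_kwh / it_equipment_kwh

# A facility consuming 1,500 kWh in total, of which 1,200 kWh feed IT gear:
print(pue(1500, 1200))  # → 1.25
```

In this example, 0.25 kWh of overhead (cooling, lighting, power distribution losses) is spent for every kWh delivered to the IT equipment.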
Internet exchange point
An Internet exchange point or IXP, also known as a neutral point, is a physical infrastructure that facilitates the exchange of traffic between ISPs' networks. Neutral points allow networks to interconnect directly through their own infrastructure, rather than through third-party networks. More details on what an Internet exchange point is and its advantages.
RAM (Random Access Memory) is the main memory of servers and personal devices. It temporarily stores data from programs in use during a session.
Disaster Recovery is a method of restoring data and functionality after a system has been disrupted due to a disaster, whether natural or man-made. More details on what a disaster recovery plan is and how to create one.
Refactoring is a cloud migration approach that involves applying some changes and optimizations to an application so that it better adapts to the cloud environment.
Cloud repatriation
Cloud repatriation refers to the transfer of workloads and applications from hyperscale public clouds to other IT solutions: private cloud environments, housing (colocation) facilities, dedicated servers, or even smaller public cloud environments. More details on cloud repatriation.
SaaS, short for Software as a Service, is a cloud computing service model that involves distributing software applications hosted in the cloud to users over the Internet via a subscription or paid purchase model.
SD-WAN, short for Software Defined Wide Area Network, is a cloud-oriented approach to managing WAN networks. SD-WAN is designed to simplify and optimize branch network connectivity. More details on SD-WAN.
A server is a computer, device, or computer program that manages networking and computing resources and provides services and functionality to other computers, programs, and devices called “clients.”
An SLA, short for Service Level Agreement, is an agreement between the service provider (ISP) and the customer that includes specific aspects of the service in terms of quality, availability and responsibility. In the SLA, ISPs define the service level of the services contracted by the customer.
A snapshot is a copy of the state of a system at a specific point in time.
The concept of data sovereignty refers to whether the data processed by an organization is subject to the laws and regulations of the country or region in which it is located. More details on this concept in this article on data sovereignty and protection in the digital economy.
Resource oversubscription occurs when a shared hosting or public cloud provider sells more computing resources than its available capacity, on the assumption that customers will not use 100% of the resources offered at the same time. More details on resource oversubscription.
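An oversubscription ratio can be expressed as resources sold versus resources physically available. A sketch with illustrative numbers:

```python
# Oversubscription ratio: vCPUs sold versus hardware threads available.
physical_cores = 64
threads_per_core = 2    # e.g. with simultaneous multithreading
vcpus_sold = 512        # total vCPUs allocated to customers

ratio = vcpus_sold / (physical_cores * threads_per_core)
print(ratio)  # → 4.0, i.e. 4 vCPUs sold per hardware thread
```

The higher the ratio, the greater the risk that customers see degraded performance ("noisy neighbors") when usage peaks coincide.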
A SOC, short for Security Operations Center, is a centralized location where an organization’s security team monitors, analyzes, detects, and resolves any security incidents that may occur. Also known as ISOC, short for Information Security Operations Center. More details on what a SOC is.
The term software refers to the group of computer programs and libraries (and their data) that are stored and run on hardware.
Fault tolerance is a property that allows systems to keep operating and avoid downtime despite component failures. To achieve this, systems are replicated so that a standby system seamlessly takes over when the primary system fails, which implies full hardware redundancy.