People use computers for various productivity, entertainment, and communication purposes. At the heart of a computer is the central processing unit (CPU). This hardware, paired with memory and storage, processes and executes the instructions people request. The problem is that people cannot communicate these requests directly to the computer hardware without some help. The operating system (OS) acts as the liaison between people and the CPU. It provides fundamental services such as interprocess communication and information sharing, memory management, file system management, and protection and security.
Operating System Features and Structures
The major functions of an operating system can be categorized into the User Interface, System Calls, and Services, the last of which is further divided into user and system services. Figure 1 illustrates these categories and their interactions.
Figure 1
Operating System Hierarchy of Subsystems and Components
The user interface provides a way for people to interact with the operating system, whether through a command line, graphical interface, or batch files. User commands trigger system calls, which connect the interface to the underlying services. Each category of system calls contains specific functions. For example, File Management system calls include creation and deletion, opening and closing, reading and writing, and attribute management.
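As an illustrative sketch, these file-management calls can be exercised through Python's os module, which wraps the corresponding operating system calls; the file name and contents below are invented for the example.

```python
import os
import tempfile

# Exercise the file-management system calls named above through Python's
# os module: creation, opening, writing, closing, reading, attribute
# management, and deletion.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY)   # creation + opening
os.write(fd, b"hello, kernel")                 # writing
os.close(fd)                                   # closing

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 64)                         # reading
os.close(fd)

size = os.stat(path).st_size                   # attribute management
os.remove(path)                                # deletion
print(data.decode(), size)
```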
Services operate at two levels. User services include program execution, communication, and file manipulation. System services run in the background and manage resources, protection, and accounting. Overall, the operating system enables users and applications to interact with hardware while coordinating commands, services, and background processes.
Process Control
When users run applications on a computer, the programs' instructions become processes that the CPU executes. A process is a program in execution, composed of a text section (program code), a data section (global variables), a stack (parameters and local variables), and often a heap for dynamic memory (Silberschatz et al., 2014). Once created, a process moves through states: new, ready, running, waiting, and terminated. These transitions are tracked by the process control block (PCB), which records process details as shown in Figure 2.
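The state transitions above can be sketched as a minimal, illustrative PCB; the field names and the transition table here are assumptions for the example, not the exact layout any real kernel uses.

```python
from dataclasses import dataclass, field

# Legal state transitions from the five-state model: new, ready,
# running, waiting, terminated.
VALID = {
    "new": {"ready"},
    "ready": {"running"},
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready"},
    "terminated": set(),
}

@dataclass
class PCB:
    """Minimal sketch of a process control block."""
    pid: int
    state: str = "new"
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

    def transition(self, new_state: str) -> None:
        if new_state not in VALID[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = PCB(pid=1)
for s in ("ready", "running", "waiting", "ready", "running", "terminated"):
    p.transition(s)
print(p.state)   # terminated
```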
Figure 2
Process States and PCB
Processes may be single-threaded, executing sequentially, or multithreaded, where multiple instruction streams improve throughput on multicore systems. Figure 3 shows the threading models, including one-to-one, many-to-one, and many-to-many, each balancing resource use against parallelism.
Figure 3
Multithreading models
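As a sketch of the multithreaded case, Python's threading module (which maps each thread onto a kernel thread, i.e., a one-to-one model) can run several instruction streams over shared data; the counter workload below is invented for illustration.

```python
import threading

# Four threads in one process incrementing a shared counter; the lock
# protects the shared data from interleaved updates.
counter = 0
lock = threading.Lock()

def worker(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 4000
```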
Memory Management
Memory management is one of the operating system’s most important services, ensuring efficient resource allocation and process execution. Applications reside on long-term storage but must be loaded into main memory as processes for the CPU to run. In multiprogramming environments, the OS dynamically allocates memory, relocating processes as needed, protecting user and system spaces, supporting logical organization, and enabling sharing.
Each process is assigned a base and limit value to define its memory region. When space is unavailable, the OS can swap processes in and out of memory, though this may create fragmentation, as shown in Figure 4. Fragmentation reduces efficiency but can be mitigated through compaction or by loading processes non-contiguously in segments.
Figure 4
Process Logical Address Space, Swapping, and Fragmentation
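The base-and-limit check can be sketched in a few lines: a logical address is valid only if it falls below the limit, and the physical address is the base plus the offset. The register values below are made up for illustration.

```python
# Sketch of base-and-limit protection: every logical address is checked
# against the limit before it is relocated by the base register.
def translate(logical: int, base: int, limit: int) -> int:
    if not 0 <= logical < limit:
        raise MemoryError(f"address {logical} outside limit {limit}")
    return base + logical

physical = translate(100, base=3000, limit=500)
print(physical)   # 3100
```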
If a process exceeds physical memory, virtual memory breaks it into pages and loads them as needed. Page faults may occur, requiring replacement strategies like FIFO, OPT, or LRU. Effective memory management prevents crashes, improves performance, and supports concurrent program execution.
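The FIFO and LRU replacement strategies can be compared with a small simulation that counts page faults over a reference string; the reference string and frame count below are illustrative.

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    """Count page faults when evicting the oldest-loaded page."""
    mem, faults = [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)               # evict the oldest-loaded page
            mem.append(p)
    return faults

def lru_faults(refs, frames):
    """Count page faults when evicting the least recently used page."""
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)           # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict least recently used
            mem[p] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, 3), lru_faults(refs, 3))   # 10 9
```

On this string LRU takes one fewer fault than FIFO because it keeps the pages touched most recently rather than the ones loaded earliest.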
File System Management
The operating system’s file system management is responsible for organizing, securing, sharing, and efficiently storing data. Since data resides on secondary storage, it must be mapped and retrieved quickly when requested by users or processes. Protection mechanisms ensure user data remains private, while permissions allow controlled sharing. To maintain efficiency, the OS manages fragmentation caused by file creation and deletion, and preserves storage integrity by detecting and replacing bad sectors. File system functions include creating and managing files and directories, controlling access through permissions, and updating file properties.
Efficient disk scheduling algorithms, such as shortest seek time first (SSTF) and LOOK, reduce delays compared to simpler methods like first-come, first-served (FCFS). Silberschatz et al. (2014) stated that “either SSTF or LOOK is a reasonable choice for the default algorithm” (p. 452). This is due to the improvement they offer over FCFS without the cost of computing a truly optimal schedule, which would add unnecessary overhead and lower performance.
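The gap between FCFS and SSTF can be sketched by totaling head movement over a request queue; the cylinder numbers below follow the familiar example queue from Silberschatz et al.

```python
def fcfs_movement(queue, head):
    """Total head movement when requests are served in arrival order."""
    total = 0
    for cyl in queue:
        total += abs(cyl - head)
        head = cyl
    return total

def sstf_movement(queue, head):
    """Total head movement when the nearest pending request is served next."""
    pending, total = list(queue), 0
    while pending:
        nearest = min(pending, key=lambda c: abs(c - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_movement(queue, 53), sstf_movement(queue, 53))   # 640 236
```

Starting from cylinder 53, SSTF moves the head 236 cylinders versus 640 for FCFS on this queue, which is why it is preferred as a default.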
Directory structures, ranging from single-level to tree-structured and acyclic-graph, organize files and support sharing without redundancy, as shown in Figure 5. These functions rely on the kernel’s I/O subsystem, which uses device drivers and controllers to manage communication between hardware and software while handling scheduling, errors, and device coordination.
Figure 5
Directory Structures
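A tree-structured directory can be sketched as nested dictionaries, with path resolution walking one component at a time from the root; the directory layout and file contents below are invented for the example.

```python
# Nested dicts are directories, strings are file contents.
root = {
    "home": {
        "alice": {"notes.txt": "todo list"},
        "bob": {},
    },
    "etc": {"hosts": "127.0.0.1 localhost"},
}

def resolve(tree, path):
    """Walk an absolute path component by component from the root."""
    node = tree
    for part in path.strip("/").split("/"):
        node = node[part]   # a KeyError here models "file not found"
    return node

content = resolve(root, "/home/alice/notes.txt")
print(content)   # todo list
```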
Protection and Security
As multiprogramming and shared systems have become standard, operating systems must enforce protection to prevent unauthorized access to objects and resources. One common method is access control lists (ACLs), though these can grow large as users and resources increase. Domain-based protection offers a scalable alternative, assigning capabilities to users or processes that define their access rights without updating each object individually. These capabilities are secured to prevent unauthorized migration into user-accessible spaces. Silberschatz et al. (2014) explained that by securing capabilities, the objects they protect will also be secured against unauthorized access.
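The contrast between the two approaches can be sketched as two lookup directions: an ACL stores, per object, who may do what, while a capability list stores, per domain, which objects it may access. The names and rights below are invented for illustration.

```python
# ACL: indexed by object, then by user.
acl = {
    "payroll.db": {"alice": {"read", "write"}, "bob": {"read"}},
}

# Capability lists: indexed by domain (here, user), then by object.
capabilities = {
    "alice": {"payroll.db": {"read", "write"}},
    "bob": {"payroll.db": {"read"}},
}

def acl_allows(obj, user, right):
    return right in acl.get(obj, {}).get(user, set())

def cap_allows(user, obj, right):
    return right in capabilities.get(user, {}).get(obj, set())

ok = acl_allows("payroll.db", "bob", "read") and not cap_allows("bob", "payroll.db", "write")
print(ok)   # True
```

Both encode the same access matrix; the capability form avoids touching every object when a domain's rights change, which is the scalability point made above.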
Protection mechanisms primarily address internal misuse, while security focuses on defending against external threats like viruses, worms, buffer overflows, or denial-of-service attacks, as shown in Figure 6. Security strategies include encryption, strong authentication, and secure protocols. Because computers are high-value targets, maintaining security requires continual monitoring and patching of vulnerabilities, making it an ongoing and critical responsibility.
Figure 6
Security
Application of Operating Systems Theory
Understanding the fundamental concepts of operating systems connects directly to many technology career paths. File system management principles are essential for ensuring data security and storage efficiency, which database administrators must understand. The protection and security of the OS, computers, and servers are a priority for cybersecurity professionals. Process control and resource management matter to software developers, enabling them to improve application efficiency and effectiveness. The fundamental functions of the OS are a vital part of helping people become more productive with computers.
Reference
Silberschatz, A., Galvin, P. B., & Gagne, G. (2014). Operating system concepts essentials (2nd ed.). Retrieved from https://redshelf.com/


