

Data Structures in C: Arrays, Linked Lists, Stacks, and Queues Explained Clearly

Introduction

Every efficient software system depends on how well it handles data. Data structures define the way data is stored, organized, accessed, and updated in memory. Even the most advanced algorithm can perform poorly if it relies on an unsuitable data structure. This is why understanding data structures is essential for building reliable and scalable applications.

The C programming language is particularly powerful for learning data structures. It provides direct control over memory, pointers, and performance, making it ideal for system-level and high-efficiency applications. Among the many data structures available, arrays, linked lists, stacks, and queues form the foundation of most real-world software systems.

This article explains these four structures from a conceptual perspective. It emphasizes how each structure works, where it is used, and why it matters in real applications.

1. Understanding Data Structures

A data structure is a systematic way to store and manage data so that it can be used effectively. The choice of data structure directly affects program speed, memory usage, and overall design simplicity.

Data structures help in:

  • Organizing memory efficiently

  • Improving algorithm performance

  • Managing large and complex data

  • Supporting dynamic and real-time operations

  • Building scalable systems

In C, data structures are implemented close to hardware, which makes them highly efficient and predictable. This is one reason C remains popular in operating systems, embedded systems, and performance-critical software.

2. Arrays: A Foundation for Data Storage

An array is one of the most basic and commonly used data structures. It stores elements of the same data type in consecutive memory locations.

Key characteristics of arrays include:

  • Fixed size defined at creation

  • Direct access to any element

  • Fast data retrieval

  • Efficient memory layout

Arrays are ideal when the number of elements is known in advance and does not change frequently.

3. Internal Working of Arrays

Arrays occupy a continuous block of memory. The position of each element is calculated using its index and the base address of the array. Because the location of any element can be computed instantly, accessing array elements is extremely fast.

This constant-time access makes arrays suitable for applications that require frequent reading of data, such as lookup tables and indexed records.

4. Drawbacks of Arrays

Despite their speed, arrays have important limitations:

  • Their size cannot change dynamically

  • Inserting new elements is costly because shifting is required

  • Deleting elements creates unused gaps

  • Extra memory may be wasted if size is overestimated

  • Overflow can occur if size is underestimated

These limitations reduce flexibility and make arrays less suitable for applications with unpredictable data growth.

5. Linked Lists: Dynamic Data Management

A linked list is a collection of elements called nodes, where each node stores data and a reference to another node. Unlike arrays, linked list nodes are not stored next to each other in memory.

This structure allows the list to grow or shrink dynamically. New elements can be added or removed without reallocating memory, making linked lists suitable for dynamic environments.

6. How Linked Lists Function

Each node in a linked list consists of:

  • A data field

  • A pointer to the next node

The list begins with a pointer known as the head. To access an element, the program follows the chain of pointers from one node to the next. Because nodes are accessed sequentially, there is no direct indexing.

While traversal takes time, insertion and deletion are efficient since only pointer references need to be updated.

7. Benefits of Linked Lists

Linked lists provide several advantages:

  • Flexible size

  • Efficient insertion and deletion

  • No memory wastage due to unused space

  • Suitable for unpredictable data sizes

They are widely used in operating systems for tasks like memory management, process scheduling, and file handling.

8. Limitations of Linked Lists

Linked lists also come with disadvantages:

  • No direct or random access

  • Slower data retrieval

  • Extra memory required for pointers

  • More complex debugging

  • Poor cache performance

Because of these factors, linked lists are not ideal when fast indexing is required.

9. Stacks: Controlled Data Handling

A stack is a linear structure in which all activity happens at one end, called the top. The most recently added element is always the first one removed, which is known as the Last In, First Out (LIFO) principle. A stack works with two fundamental actions:

  • Placing an element on the top (push)

  • Removing the element from the top (pop)

Because access is tightly controlled, stacks are easy to manage and highly efficient for specific tasks.

10. How Stacks Work Conceptually

Stacks operate in one direction only. Elements below the top cannot be accessed directly.

This limitation results in:

  • Consistent and predictable behavior

  • Very fast insertion and removal

  • Clean handling of temporary or short-lived data

11. Practical Uses of Stacks

Stacks play a vital role in many systems, including:

  • Managing function calls during program execution

  • Evaluating mathematical and logical expressions

  • Supporting undo and redo features in applications

  • Implementing backtracking solutions

  • Validating syntax in compilers

  • Handling temporary memory allocation

When a function completes execution, its associated data is automatically cleared from the stack.

12. Benefits of Using Stacks

Stacks provide several advantages:

  • Straightforward structure

  • High-speed operations

  • Minimal memory overhead

  • Well-defined access rules

Their simplicity makes them dependable in scenarios where strict control is required.

13. Drawbacks of Stacks

Despite their usefulness, stacks have limitations:

  • Only the top item can be accessed

  • Direct access to middle elements is impossible

  • Exceeding capacity can cause stack overflow

14. Queues: Sequential Data Flow

Queues mirror real-life waiting systems: the element that arrives first is served first, which is known as the First In, First Out (FIFO) principle. They support two main operations:

  • Inserting elements at the back (enqueue)

  • Removing elements from the front (dequeue)

15. Working Principle of Queues

A queue has two distinct ends:

  • Front – where elements are processed and removed

  • Rear – where new elements are added

This structure ensures orderly and fair processing.

16. Common Uses of Queues

Queues are widely used in systems such as:

  • Processor scheduling

  • Task and job management

  • Printer task handling

  • Network packet transmission

  • Event-driven applications

  • Messaging systems

Any application that processes requests in sequence relies on queues.

17. Comparison of Arrays, Linked Lists, Stacks, and Queues

Each structure serves a specific purpose:

Arrays

  • Provide quick data access

  • Have a fixed size

  • Ideal when the data size is known in advance

Linked Lists

  • Grow and shrink dynamically

  • Access is slower compared to arrays

  • Efficient for frequent insertions and deletions

Stacks

  • Follow the Last In, First Out principle

  • Useful for nested operations and execution control

Queues

  • Follow the First In, First Out principle

  • Ideal for scheduling and fair processing

Choosing the right structure depends on the problem being solved.

18. Role in Real-World Software Systems

These data structures form the foundation of modern computing:

  • Operating systems depend on queues for process management

  • Compilers rely on stacks for syntax analysis

  • Databases use arrays to build fast indexes

  • Networks use queues to manage data flow

  • Memory managers use linked lists to track allocation

They operate behind the scenes but are essential to system performance.

19. Final Thoughts

Arrays, linked lists, stacks, and queues represent four core strategies for storing and processing data. Arrays prioritize speed, linked lists offer adaptability, stacks control execution order, and queues maintain fairness.

A solid understanding of these structures enables developers to build efficient, scalable, and maintainable programs. For anyone serious about C programming, mastering data structures is a must, not an option.

To gain a deep, practical mastery of these essential data structures and their implementation in C, our C Language Online Training Course provides structured, hands-on learning. For a broader curriculum that integrates these concepts into full-stack development, explore our Full Stack Developer Course.

Frequently Asked Questions

1. What is a data structure in C?
A data structure in C defines how information is arranged in memory to enable efficient operations.

2. Why are arrays widely used?
Arrays provide fast access because their elements are stored in contiguous memory locations.

3. When should linked lists be used?
Linked lists are ideal when data size changes often and dynamic memory allocation is needed.

4. What characterizes a stack?
A stack operates on the Last In, First Out principle, where the most recent entry is processed first.

5. What characterizes a queue?
A queue follows the First In, First Out rule, ensuring elements are handled in the order they arrive.

Dynamic Memory Allocation in C: malloc, calloc, free


Introduction

Every program needs memory. Some programs need more memory during execution. Others may need less. In real-world applications, memory requirements are not always fixed. Data may grow, shrink, or change based on user input, file content, or system behavior. Static memory allocation cannot handle these situations efficiently. Dynamic memory allocation in C solves this problem.

The program decides how much memory is needed and asks the operating system to provide it. Later, if the memory is no longer required, it can be released. This approach makes programs flexible, efficient, and scalable.

This article explains the key concepts of dynamic memory allocation in simple language using the most important functions: malloc, calloc, and free. The objective is to provide a clean mental model that removes fear and confusion.

1. What Is Dynamic Memory Allocation

Dynamic memory allocation means requesting memory from the operating system while the program is running. Instead of using only pre-defined variables, the program can ask for additional space. This approach allows the size of data to be determined during execution. The program is not limited by the number or size of variables defined at compile time.

Dynamic memory allocation is essential in situations where:

  • Amount of data is unknown in advance

  • Data size changes while running

  • Large datasets must be handled

  • Memory must be managed efficiently

  • Temporary storage is required

This flexibility is the reason dynamic memory allocation is used in many systems, algorithms, and applications.

2. Why Static Allocation Is Not Enough

Static memory allocation assigns memory at compile time. This means the size and number of variables must be known ahead of time. For simple programs, this works. But most real applications deal with unknown data. Examples include:

  • Reading a file of unknown length

  • Receiving input from a user

  • Handling growing lists of items

  • Working with network packets

  • Allocating memory for dynamic structures

Static allocation cannot adjust to these situations. It either wastes memory or fails when data exceeds limits. Dynamic memory allocation solves this problem.

3. The Role of Heap Memory

C uses two major memory regions:

  • Stack

  • Heap

The stack stores local variables. Its size is fixed, and memory is released automatically. The heap is a large region of memory reserved for dynamic allocation. When the program needs extra space, it requests a block from the heap. If the heap has enough free space, a block is provided. This memory stays allocated until the program releases it.

Understanding the heap is important because dynamic allocation works entirely in this region.

4. How Dynamic Allocation Works Conceptually

The process of dynamic memory allocation involves three actions:

  1. Request memory

  2. Use the memory

  3. Release the memory

The operating system handles the actual allocation. The program receives a reference, usually an address, pointing to the allocated block. The program uses this reference to store or access data. When the block is no longer needed, it must be released. If not released, memory leaks occur.

Memory leaks cause:

  • Increased memory usage

  • Performance issues

  • Program crashes in worst cases

Dynamic allocation gives power but also responsibility.

5. Understanding malloc

malloc stands for memory allocation. Its purpose is simple: request a specific number of bytes from the heap. If a suitable block is available, the allocator provides it, and the function returns an address pointing to the block.

The block contains raw memory. It is not initialized. Whatever bits were previously in that area remain. The programmer must assign or overwrite values.

malloc is efficient, lightweight, and frequently used. It is ideal when the program knows exactly how many bytes are required.

6. Understanding calloc

calloc stands for contiguous allocation. It also requests memory, but with two important differences:

  • Memory returned is initialized to zero

  • Space requested is for multiple elements of equal size

calloc is useful when a clean block of memory is required. Instead of random data, it guarantees that all bytes are zero. This prevents unpredictable behavior. calloc is often used when preparing memory for arrays or tables.

The memory allocated is continuous. This ensures elements are stored next to each other, which simplifies iteration and searching.

7. Comparison Between malloc and calloc

Although both allocate memory dynamically, they differ in behavior.

malloc:

  • Allocates memory only

  • Does not initialize data

  • Faster in many cases

  • Suitable when initialization is not required

calloc:

  • Allocates and initializes memory

  • Sets all bytes to zero

  • Useful for arrays and clean objects

  • Slightly slower due to initialization

Choosing between them depends on whether initialization is necessary. Both return references to memory blocks. Both require release later.

8. Understanding free

free releases memory previously allocated. It returns the memory to the system. After calling free, the block is no longer reserved. If the program tries to use the memory after releasing it, errors occur.

free is important because heap memory is not reclaimed automatically while the program runs. If the program allocates many blocks but never releases them, memory is slowly consumed. Eventually, the program, or even the system, may run out of memory.

Calling free at the correct time prevents waste and improves performance.

9. What Happens if Memory Is Not Released

Failing to release memory leads to memory leaks. A memory leak is a situation where allocated memory is never returned. The program may run normally at first, but memory usage grows over time.

Memory leaks cause:

  • Slower performance

  • Higher memory consumption

  • Poor system stability

  • Application crashes

This is common in long-running programs such as:

  • Servers

  • Embedded systems

  • Background services

Memory leaks must be avoided through correct use of free.

10. How free Works Internally

When free is called, the allocator marks the block as available for reuse. The block is not deleted or erased; its contents may linger until the space is handed out again. The program should not access this block after freeing it. If it does, the behavior is undefined.

Understanding this concept helps avoid mistakes. Freeing a block does not mean the data disappears. It simply means the program cannot rely on it.

11. Pointer Role in Dynamic Allocation

Dynamic memory allocation returns a pointer. This pointer acts as a reference to the block of memory. Without pointers, dynamic allocation cannot exist. Everything about dynamic memory involves pointers.

A pointer:

  • Stores the location of the block

  • Allows access to the data

  • Is required when releasing memory

Pointers and dynamic allocation are inseparable. Any misunderstanding of pointers leads to errors. A clear mental model is necessary.

12. Memory Fragmentation

As programs allocate and free memory, gaps appear in the heap. These gaps may be too small for some requests. This is called memory fragmentation. It can reduce available memory even when the total free space is large.

Fragmentation is common in:

  • Complex systems

  • Long-running applications

  • Dynamic structures that change frequently

Good allocation strategies and periodic cleanup reduce fragmentation.

13. Dynamic Memory for Data Structures

Many advanced data structures rely on dynamic memory:

  • Linked lists

  • Trees

  • Graphs

  • Hash tables

  • Queues and stacks

These structures grow and shrink during execution. Their size is not fixed. Static allocation cannot manage them. Dynamic allocation supports flexible modeling of structures.

The program allocates memory for each node or element when needed. When removed, the memory is released. This matches real-world behavior.

14. Why Dynamic Allocation Matters in Real Applications

Real systems deal with variable data. The amount of data arriving through input, files, or networks is unpredictable. Dynamic allocation allows programs to adapt. It prevents waste and allows scaling.

Examples:

  • Web servers receiving connections

  • Databases processing rows

  • Games storing objects

  • Scientific tools reading datasets

Dynamic memory allocation enables efficient resource use.

15. When Not to Use Dynamic Allocation

Dynamic allocation should not be used when:

  • Data size is known in advance

  • Static arrays are enough

  • Performance must be predictable

  • Small programs do not require flexibility

Dynamic allocation introduces complexity and responsibility. If not needed, simpler approaches are better.

16. Common Problems in Dynamic Memory Allocation

Beginners often face mistakes:

  • Using uninitialized pointers

  • Forgetting to release memory

  • Releasing memory twice

  • Accessing memory after free

  • Allocating too much or too little

These issues come from misunderstanding pointers or lifecycle of memory. Careful design, testing, and checking prevent errors.

17. Best Practices

To use dynamic memory safely:

  • Allocate only what is required

  • Always release memory when done

  • Set references to null after free

  • Check for allocation success

  • Avoid excessive allocation

Good habits prevent leaks and crashes. Programs become reliable and efficient.

18. Performance Considerations

Dynamic allocation is more expensive than static allocation. The operating system must find suitable blocks. Allocation and release take time. Too many allocation calls can reduce performance. Frequent small allocations create fragmentation.

Efficiency improves when:

  • Blocks are reused

  • Allocation size is predictable

  • Buffering strategies are used

Balanced use increases speed.

19. Dynamic Allocation in Long-Running Systems

Servers, services, and embedded systems may run for months or years. Memory leaks accumulate. Dynamic allocation must be used carefully. Regular monitoring, testing, and profiling detect leaks.

Clean release and recycling of blocks maintain stability. Good memory management is essential in critical systems.

20. Summary

Dynamic memory allocation makes software flexible and scalable. malloc allocates memory. calloc allocates and initializes memory. free releases previously allocated memory. Together, these functions give fine-grained control over how programs use memory.

Dynamic allocation is powerful but must be handled with care. Memory leaks, fragmentation, and incorrect pointer usage can cause problems. Understanding concepts clearly prevents errors. Dynamic memory is essential in real-world applications that handle variable data, dynamic structures, and long-running tasks.

To master these and other critical C programming concepts through structured, hands-on learning, explore our C Language Online Training Course. For a broader development path that incorporates systems-level programming, our Full Stack Developer Course offers comprehensive training.

Frequently Asked Questions

1. What is dynamic memory allocation in C?
It is the process of requesting memory at runtime from the heap and releasing it when no longer needed.

2. What is the difference between malloc and calloc?
malloc allocates memory only. calloc allocates and initializes all bytes to zero.

3. Why is free necessary?
free releases memory back to the system. Without free, memory leaks occur.

4. Where does dynamic memory come from?
Dynamic memory is allocated from the heap, a region separate from the stack.

5. What causes memory leaks?
Allocating memory without later releasing it causes memory leaks, leading to high memory usage and crashes.

6. Is dynamic allocation faster or slower?
Dynamic allocation is slower than static allocation, but more flexible. It is used when data size cannot be predicted.

File Handling in C Programming: Read, Write, Append


Introduction

Files are an essential part of every software system. Whether it is a small utility program or a large database application, storing information permanently is a common requirement. Without files, all data processed by a program would disappear once the program stops running. File handling in C enables programs to create files, store data, retrieve information, modify existing content, and manage long-term storage. Understanding how file handling works is important for anyone who wants to build real applications using the C language.

This article explains file handling in C in clear, simple language. The focus is on concepts, real-world meaning, and how operations behave behind the scenes. The three core operations discussed are reading from files, writing to files, and appending data to existing files. These operations form the basis of most file-related tasks in programming.

1. What Is File Handling in C

File handling is the process of working with files stored on disk. A file is a named area in storage that holds data. Programs can use files to remember information permanently. Unlike variables, which disappear when the program ends, data stored in files persists. This enables programs to save results, settings, logs, records, and documents.

C provides a standard I/O library, declared in <stdio.h>, that allows programmers to open files, read from them, write into them, and close them after use. The library handles communication between the program and the disk, so the programmer works with a small set of high-level operations.

2. Why File Handling Is Important

File handling is important because many real-world systems must preserve information. Examples include:

  • Logging errors or messages

  • Saving user data

  • Storing configuration

  • Recording results

  • Handling input and output from external sources

Programs often run multiple times. Each time they may need information created during previous runs. File handling makes this possible.

A program without file handling can only work with temporary data. Once it ends, all values are lost. File handling converts temporary results into permanent information.

3. Types of File Operations in C

File handling in C involves several kinds of operations:

  1. Opening a file

  2. Reading data from a file

  3. Writing data into a file

  4. Appending new content to an existing file

  5. Closing the file

Opening a file establishes a connection between the program and the storage device. Closing a file ends that connection. Between these two steps, the program may read, write, or append content.

Understanding these operations is essential to work with files in C.

4. How Files Are Stored on Disk

To understand file handling, it is helpful to visualize how files live on the storage device. A disk is divided into blocks. A file is stored by allocating one or more blocks. Each block contains part of the file data. The file has a name and an internal structure. When a program opens a file, the operating system finds these blocks and allows the program to read or write.

A file can contain any kind of data:

  • Text

  • Numbers

  • Binary data

  • Documents

  • Logs

The program reads or writes bytes in sequence. The system takes care of locating the blocks.

5. How a Program Opens a File

Before any work can be done, the program must open the file. Opening a file creates a link between the program and the file stored on disk. The program specifies the name of the file and the mode of operation. The mode indicates what the program wants to do: read, write, or append.

Opening a file is similar to opening a door. It gives access to the space inside. Once the program is done, it must close the file to release that access.

6. File Opening Modes

When a program opens a file, it must specify how the file will be used. The most common modes are:

  • Read mode

  • Write mode

  • Append mode

Each mode has its own behavior. Selecting the correct mode ensures that the program does not lose data or cause errors.

7. Reading From Files

Reading from a file means taking information stored on disk and bringing it into the program. This allows the program to display text, process records, or perform calculations using stored values. Reading is similar to opening a book and scanning its content.

The program reads data sequentially unless otherwise instructed. The data can be read character by character, line by line, or block by block. After reading, the program may use the data for various purposes, such as computing results, filtering values, or generating output.

8. Writing to Files

Writing to a file stores new data permanently. When a program writes to a file, it sends bytes to the storage device. These bytes are arranged sequentially inside the file. Writing allows programs to generate output that can be used later, even after the program stops.

Writing is commonly used in:

  • Reports

  • Logs

  • Database storage

  • Exporting results

  • Creating documents

A file created in write mode will overwrite any existing content with new data. Therefore, write mode must be used carefully when existing information must be preserved.

9. Append Mode

Appending adds new content to the end of an existing file. Instead of erasing or replacing the previous content, append mode keeps the old data and places new data after it. This is useful when programs must keep a growing history, such as:

  • Logging messages

  • Recording transactions

  • Collecting sensor data

  • Adding records to files

Appending is non-destructive. It preserves the entire file and only adds new data to the end. This makes append mode suitable for files where continuity of information is important.

10. The Difference Between Write and Append

Write mode clears the file before writing new content. Append mode keeps the file unchanged and extends it. Both modes store data, but the intention is different.

Write mode is used when:

  • Old data is no longer relevant

  • The program wants to start fresh

Append mode is used when:

  • Old data must be preserved

  • New entries must be added

Choosing the correct mode avoids accidental loss of information.

11. How Data Is Read From Files Internally

When data is read from a file, the operating system retrieves stored bytes from disk. These bytes are transferred into memory, where the program can use them. The process can be repeated multiple times until the end of file is reached.

The “end of file” is a condition that tells the program there is no more data to read. Programs must always detect the end of file to avoid errors. This prevents reading beyond the stored information.

12. How Data Is Written to Files Internally

When the program writes data to a file, the operating system must store it on the disk. The system collects bytes in memory and then writes them to disk blocks. This may occur immediately or after the program finishes. Efficient writing depends on buffering, which is the temporary storage of data before final writing.

Thanks to buffering, writes often complete quickly because the data is queued in memory before reaching the disk, while reads may have to wait for the system to locate and deliver the requested blocks.

13. Keeping Track of File Position

Every file has a current position indicator. This indicator tells the program where the next read or write will occur. When a file is opened, the position is set at the beginning. After reading or writing some data, the position moves forward.

This pointer ensures that data is processed in sequence. The program can adjust the position if necessary. For example, it can jump to the end when appending, or go backward in certain cases. Understanding file position is essential for handling large files correctly.

14. Buffering in File Handling

File operations use buffering to improve performance. Instead of accessing the disk for every single byte, the system collects data in memory and writes it in bigger chunks. This reduces the number of disk operations, making file handling faster.

Buffering ensures that reading and writing are efficient. Programs that handle large files benefit greatly from this mechanism. Developers do not need to manage buffering directly. The system takes care of it internally.

15. Text Files and Binary Files

File handling in C supports two major types of files:

  • Text files

  • Binary files

Text files store human-readable characters. Binary files store raw data. Although they are different in representation, the method of reading and writing follows the same conceptual process. The difference lies in how the data is interpreted.

Text files are useful for logs, configurations, and documents. Binary files are used for images, audio, databases, and structured data. Understanding the conceptual difference helps in choosing the right format.

16. Error Handling in File Operations

File handling may fail for several reasons:

  • File does not exist

  • Disk is full

  • Permissions are denied

  • File is locked

  • Path is invalid

Programs must check for possible errors before reading or writing. Detecting errors early prevents failure and data loss. File operations return information that helps detect success or failure. Proper error handling is a key part of safe file management.

17. Closing Files

When a program closes a file, the system ensures that all buffered data is written to disk. It also frees memory associated with the file.

Failing to close files may lead to memory leaks, file corruption, or incomplete writing. Good programming practice requires always closing files once operations are done.

18. Real-World Use Cases

File handling is used in a wide range of applications:

  • Storing user preferences

  • Recording logs for debugging

  • Storing game progress

  • Saving form entries

  • Keeping transaction history

  • Collecting sensor readings

Any scenario where information must survive after the program ends requires file handling. Understanding these concepts is necessary for practical software development.

19. Reading Large Files Safely

Programs must read large files carefully. Instead of reading everything at once, data should be processed in segments. This prevents memory overload and allows the system to remain responsive.

Large-file reading is common in:

  • Data analytics

  • Log processing

  • File conversion tools

Segmented processing is efficient and scalable.

20. The Importance of Append Mode in Logging

Many systems generate logs. Logs record errors, events, messages, or activities. Append mode is ideal because logs must grow over time.

Appending each entry ensures that:

  • No previous information is lost

  • New events are added

  • The full history is preserved

This makes debugging and tracking activities easier.

Summary

File handling in C gives programs the ability to read, write, and append data to files on disk. Files allow data to persist beyond the lifetime of the program. Reading retrieves information from files. Writing creates or replaces file content. Append mode adds new content to existing files without deleting previous data.

The operating system manages the actual storage, while the program works through the standard library functions. File position, buffering, and error handling ensure correct behavior. Every real application requires file handling to store and manage data.

Understanding how file handling works enables developers to build more powerful and meaningful programs. To learn these essential C programming skills through structured, hands-on training, explore our C Language Online Training Course. For a comprehensive development path that includes system-level programming, consider our Full Stack Developer Course.

Frequently Asked Questions

1. What is file handling in C?
Ans: File handling is the process of opening, reading, writing, appending, and closing files stored on disk.

2. What is the difference between write and append?
Ans: Write replaces the file content. Append keeps existing content and adds new data at the end.

3. Why do we need to close files?
Ans: Closing ensures that all data is written, and resources are released.

4. What types of files can C work with?
Ans: C can work with text and binary files using the same conceptual operations.

5. What is the end of file condition?
Ans: It is a signal that indicates there is no more data to read from the file.