Syllabus:
Unit 9: Computer Network and Distributed System — Basic computer network concepts, network structure, network software, reference models, OSI model, TCP/IP model, X.25 networks, Frame Relay, ATM networks, medium access sub-layer, network layer, application layer, communication media, network topologies, communication devices, synchronous and asynchronous communication, transmission bands; Introduction to Parallel and Distributed Systems: architecture, challenges, principles and paradigms; Security: threats and attacks, different malware and its protection, policy and mechanism, design issues, cryptography and cryptographic algorithms, cryptographic protocols, key distribution, basic concepts of naming services, DNS, attribute-based naming;
Distributed File Systems: client perspective, server perspective, NFS, Coda, Google File System (GFS). Parallel Programming: parallel computing, parallel programming structure.
A computer network is a collection of devices connected to enable communication and the sharing of resources. The devices in a computer network can include computers, servers, printers, routers, switches, and other devices that can connect to a network. There are different types of computer networks, including local area networks (LANs), wide area networks (WANs), and metropolitan area networks (MANs). In a LAN, devices are connected within a small area such as a home, office, or school. WANs, on the other hand, connect devices across large geographical areas, such as different cities or even countries. Networks use protocols, such as TCP/IP, to enable communication between devices. Networks can also be classified by transmission medium, such as wired networks (using Ethernet cables) or wireless networks (using Wi-Fi). Overall, computer networks are essential for sharing resources, data, and communication in modern computing environments.
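To make protocol-based communication between devices concrete, here is a minimal sketch (not part of the original notes) of two programs talking over TCP/IP using Python's standard socket module; the loopback address and port number are arbitrary choices so the example runs on a single machine.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # loopback address and an arbitrary free port
ready = threading.Event()

def run_server():
    # Minimal TCP server: accept one connection and echo back what it receives.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                       # tell the client the server is listening
        conn, _addr = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"echo: " + data)

threading.Thread(target=run_server, daemon=True).start()
ready.wait()

# Client side: open a TCP connection, send a message, and read the reply.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello network")
    print(cli.recv(1024).decode())        # -> echo: hello network
```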
The structure (topology) of a computer network refers to how the devices and components of the network are organized and connected to each other. There are several common network structures:

Bus network: all devices are connected to a single cable (the "bus"). Data is transmitted along the cable and all devices receive it, but only the device to which the data is addressed actually processes it.

Star network: all devices are connected to a central hub or switch. Data is transmitted from one device to the hub or switch, which then forwards it to the intended recipient device.

Ring network: devices are connected in a circular chain. Data travels around the ring in one direction, with each device passing it along to the next device until it reaches its destination.

Mesh network: devices are connected to each other directly, creating multiple paths for data to travel. This makes the network more fault-tolerant, as data can be rerouted if a connection fails.

Hybrid network: combines two or more of the above structures to create a more complex network that can meet specific needs or requirements.

The structure of a network can affect its speed, reliability, and scalability, and the choice of network structure depends on factors such as the size of the network, the type of data being transmitted, and the level of security required.
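As a rough illustration of how topology affects fault tolerance, the sketch below (with made-up node names and links) models a star and a mesh as adjacency lists and checks which devices remain reachable after a single link fails.

```python
# Model a star and a mesh topology as adjacency lists and test connectivity
# after removing one link. Node names and link choices are illustrative only.

def connected(adj, start):
    """Return the set of nodes reachable from `start` by simple graph traversal."""
    seen, stack = {start}, [start]
    while stack:
        node = stack.pop()
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                stack.append(nbr)
    return seen

def remove_link(adj, a, b):
    """Return a copy of the topology with the link a<->b removed."""
    new = {n: set(nbrs) for n, nbrs in adj.items()}
    new[a].discard(b)
    new[b].discard(a)
    return new

star = {"hub": {"A", "B", "C"}, "A": {"hub"}, "B": {"hub"}, "C": {"hub"}}
mesh = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}

# Losing one link in the star isolates a device; the mesh can reroute around it.
print(connected(remove_link(star, "hub", "A"), "B"))  # {'B', 'C', 'hub'} -- A unreachable
print(connected(remove_link(mesh, "A", "B"), "B"))    # {'A', 'B', 'C'} -- still connected
```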
Network software refers to the programs and applications that are used to manage and control computer networks. Some common types of network software include:

Network operating systems (NOS): specialized operating systems designed to manage and control network resources such as servers, printers, and user accounts. Examples include Microsoft Windows Server, Linux, and Novell NetWare.

Network management software: used to monitor and manage network performance, diagnose network issues, and control network access. Examples include SolarWinds Network Performance Monitor, Nagios, and PRTG Network Monitor.

Protocol analyzers: programs that capture and analyze network traffic, helping network administrators troubleshoot issues and optimize network performance. Examples include Wireshark, Tcpdump, and Microsoft Network Monitor.

Remote access software: allows users to access network resources from remote locations. Examples include Microsoft Remote Desktop, Citrix Virtual Apps and Desktops, and LogMeIn.

Security software: protects networks from unauthorized access, malware, and other threats. Examples include firewalls, antivirus software, intrusion detection and prevention systems (IDS/IPS), and VPNs.

Collaboration software: allows users to share files, communicate, and work together on projects. Examples include Microsoft Teams, Slack, and Zoom.

Overall, network software is essential for managing and controlling the various components of a computer network, ensuring that it is secure, reliable, and efficient.
A network reference model is a framework for describing how data is transmitted over a network. The most well-known reference model is the OSI (Open Systems Interconnection) model, developed by the International Organization for Standardization (ISO). The OSI model consists of seven layers, each of which performs a specific function in the transmission of data:

Physical layer: transmits raw bits over a physical medium, such as copper wire or fiber optic cable.

Data link layer: ensures that data is transmitted error-free over the physical medium by breaking data into frames and adding error detection and correction codes.

Network layer: routes data between networks, using logical addresses such as IP addresses to identify devices.

Transport layer: ensures that data is transmitted reliably between devices by breaking it into segments and adding sequencing and error detection codes.

Session layer: establishes and manages connections between devices, allowing them to communicate with each other.

Presentation layer: translates data into a format that can be understood by the receiving device; it may also perform encryption and compression.

Application layer: provides network services to applications, such as email, file transfer, and web browsing.

The OSI model is often compared to the TCP/IP model, a simplified model consisting of four layers: the network access layer, internet layer, transport layer, and application layer. The TCP/IP model is the one actually used in practice, while the OSI model serves mainly as a conceptual reference.
The OSI (Open Systems Interconnection) model is a conceptual framework for understanding how data is transmitted over a network. It was developed by the International Organization for Standardization (ISO) in the 1980s as a standard for communication between different computer systems. The OSI model consists of seven layers, each of which performs a specific function in the transmission of data:

Physical layer: transmits raw bits over a physical medium, such as copper wire or fiber optic cable. It deals with the electrical, mechanical, and physical characteristics of the transmission medium.

Data link layer: ensures that data is transmitted error-free over the physical medium by breaking data into frames and adding error detection and correction codes. This layer deals with the protocols that govern access to the physical network medium.

Network layer: routes data between networks, using logical addresses such as IP addresses to identify devices. This layer establishes, maintains, and terminates connections between network devices.

Transport layer: ensures that data is transmitted reliably between devices by breaking it into segments and adding sequencing and error detection codes. It provides end-to-end error recovery and flow control.

Session layer: establishes and manages connections between devices, allowing them to communicate with each other. It enables processes running on different devices to establish a connection, maintain it during the communication session, and terminate it when the session is complete.

Presentation layer: translates data into a format that can be understood by the receiving device. It may also perform encryption and compression.

Application layer: provides network services to applications, such as email, file transfer, and web browsing. This layer interacts directly with the application software and provides the interface for accessing network services.

The OSI model is a conceptual framework and is not used directly in network implementations. However, it provides a useful way of understanding the different functions of network protocols and how they work together to transmit data over a network.
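The following toy sketch (invented for illustration, with fake header fields and addresses) shows the encapsulation idea behind the layered model: each layer wraps the data from the layer above with its own header, and the receiver strips the headers in reverse order.

```python
# Toy OSI-style encapsulation: headers here are readable placeholders, not real
# protocol formats.

def encapsulate(payload: bytes) -> bytes:
    segment = b"[TCP src=5000 dst=80]" + payload          # transport layer header
    packet  = b"[IP src=10.0.0.1 dst=10.0.0.2]" + segment # network layer header
    frame   = b"[ETH src=aa:bb dst=cc:dd]" + packet       # data link layer header
    return frame

def decapsulate(frame: bytes) -> bytes:
    # Each "]" marks the end of one layer's header; strip them one at a time.
    for _ in range(3):
        frame = frame.split(b"]", 1)[1]
    return frame

frame = encapsulate(b"GET /index.html")
print(frame)
print(decapsulate(frame))  # -> b'GET /index.html'
```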
TCP/IP protocols: TCP/IP (Transmission Control Protocol/Internet Protocol) is a suite of communication protocols used for transmitting data over the Internet or any network that uses the Internet Protocol (IP). It consists of several protocols that work together to facilitate data transmission, including:

IP (Internet Protocol): responsible for routing data packets between devices across a network.

TCP (Transmission Control Protocol): responsible for ensuring that data is transmitted reliably between devices. It breaks data into packets, sends them, and then verifies that they have been received correctly.

UDP (User Datagram Protocol): a simpler protocol than TCP that does not guarantee reliable delivery but is faster.

DNS (Domain Name System): translates domain names into IP addresses so that devices can find each other on the Internet.

SMTP (Simple Mail Transfer Protocol): used for sending email messages between servers.

HTTP (Hypertext Transfer Protocol): used for transmitting data over the World Wide Web.

FTP (File Transfer Protocol): used for transferring files between computers on a network.

These protocols work together to ensure that data can be transmitted between devices over the Internet or a network in a reliable, secure, and efficient manner.
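A brief sketch of two of these protocols in action, using only Python's standard library; "example.com" is a placeholder host and the script needs Internet access to run.

```python
import socket
import urllib.request

# DNS: translate a domain name into an IP address.
ip = socket.gethostbyname("example.com")
print("example.com resolves to", ip)

# HTTP: fetch a web page (urllib opens the underlying TCP connection for us).
with urllib.request.urlopen("http://example.com/") as resp:
    print("HTTP status:", resp.status)
    print(resp.read(80))  # first bytes of the HTML body
```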
X.25 is a protocol suite used for communication over packet-switched networks. It was widely used in the 1980s and early 1990s for connecting computers and other devices to wide-area networks. X.25 networks use virtual circuits to establish a connection between devices, and they provide error correction and flow control mechanisms to ensure reliable data transmission. X.25 also includes a network layer protocol that defines how packets are routed between devices. Although X.25 networks are no longer widely used today, they played an important role in the development of packet-switched networking and helped pave the way for the Internet. Some legacy systems may still use X.25 for communication, but it has largely been replaced by newer technologies such as TCP/IP-based networks, which are more efficient and provide greater bandwidth.
Frame Relay is a standardized wide area network (WAN) technology that was widely used in the 1990s and early 2000s for connecting LANs (Local Area Networks) over long distances. Frame Relay operates at the data link layer of the OSI model and provides a packet-switched service, similar to packet switching in TCP/IP networks. It uses virtual circuits to establish connections between devices, allowing multiple devices to share the same network resources. In a Frame Relay network, data is transmitted in small units called frames. Each frame contains a header that includes information about its destination and the virtual circuit it belongs to, as well as error detection and control information. The network uses this information to route the frames to their destination. Frame Relay networks provide a number of advantages, such as high bandwidth efficiency, low overhead, and low latency. However, they also have some disadvantages, such as a lack of error correction, which can lead to dropped frames and retransmissions. Frame Relay has largely been replaced by newer WAN technologies, such as MPLS (Multiprotocol Label Switching) and VPN (Virtual Private Network), but it is still used in some legacy systems and in some parts of the world where newer technologies have not yet been widely adopted.
ATM (Asynchronous Transfer Mode) is a high-speed networking technology that was developed in the 1980s and 1990s for transmitting data, voice, and video over wide area networks (WANs) and local area networks (LANs). ATM is a packet-switched technology that breaks data into fixed-sized cells of 53 bytes each. Each cell contains a header that includes information about its destination and the virtual circuit it belongs to, as well as error detection and control information. The network uses this information to route the cells to their destination. ATM provides a number of advantages over other networking technologies, such as high bandwidth, low latency, and support for multiple traffic types (data, voice, and video). It also provides Quality of Service (QoS) guarantees, allowing network administrators to prioritize traffic and allocate network resources accordingly. ATM networks can be configured in a variety of topologies, including point-to-point, point-to-multipoint, and multipoint-to-multipoint. They can also be used to create virtual private networks (VPNs) that provide secure connections between geographically dispersed sites. Although ATM was widely used in the 1990s and early 2000s, it has largely been replaced by newer technologies such as MPLS (Multiprotocol Label Switching) and Ethernet. However, ATM is still used in some legacy systems and in some parts of the world where newer technologies have not yet been widely adopted.
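The sketch below (not from the original notes) illustrates the fixed-cell idea: a payload is segmented into 48-byte chunks and each chunk is prefixed with a toy 5-byte header to form 53-byte cells. Real ATM headers carry VPI/VCI and control fields rather than the placeholder used here.

```python
CELL_PAYLOAD = 48   # bytes of user data per cell
HEADER_SIZE = 5     # bytes of header per cell

def to_cells(data: bytes, circuit_id: int) -> list:
    """Split data into 53-byte cells: a toy 5-byte header plus 48 bytes of payload."""
    cells = []
    for offset in range(0, len(data), CELL_PAYLOAD):
        chunk = data[offset:offset + CELL_PAYLOAD]
        chunk = chunk.ljust(CELL_PAYLOAD, b"\x00")                 # pad the last cell
        header = circuit_id.to_bytes(2, "big") + b"\x00\x00\x00"   # placeholder header
        cells.append(header + chunk)
    return cells

cells = to_cells(b"A" * 120, circuit_id=7)
print(len(cells), "cells,", len(cells[0]), "bytes each")   # 3 cells, 53 bytes each
```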
The Medium Access Control (MAC) sub-layer is a sub-layer of the Data Link Layer in the OSI model of computer networking. It is responsible for managing access to the physical transmission medium, such as a shared network cable or wireless frequency spectrum, and coordinating the transmission of data between devices on the network. The MAC sub-layer provides services such as addressing, channel access control, flow control, and error recovery. It determines how to share the network medium among multiple devices and how to transmit data without collisions. The most common MAC protocols are Carrier Sense Multiple Access with Collision Detection (CSMA/CD) for Ethernet networks and Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) for wireless networks. Overall, the MAC sub-layer plays a critical role in enabling reliable and efficient communication between devices on a network by managing access to the shared medium.
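The following is a deliberately simplified simulation, with arbitrary probabilities and timings, of the carrier-sense-and-back-off behaviour that CSMA-style protocols use; it is meant only to illustrate the idea, not the actual Ethernet or Wi-Fi algorithms.

```python
import random

def csma_send(station, medium, max_attempts=5):
    """Try to transmit on a shared medium: sense the carrier, back off after collisions."""
    for attempt in range(max_attempts):
        if not medium["busy"]:                   # carrier sense: is the medium idle?
            medium["busy"] = True                # start transmitting
            collided = random.random() < 0.3     # pretend another station sometimes collides
            medium["busy"] = False
            if not collided:
                print(f"{station}: frame sent on attempt {attempt + 1}")
                return True
            # collision detected: wait a random (exponentially growing) backoff
            backoff = random.uniform(0, 2 ** attempt)
            print(f"{station}: collision, backing off ~{backoff:.2f} time units")
    return False

medium = {"busy": False}
csma_send("station-A", medium)
```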
The Network Layer is the third layer in the OSI model, situated above the Data Link Layer and below the Transport Layer. It provides network-to-network connectivity by routing data packets between different networks, regardless of the physical technology used by each network. The main function of the Network Layer is to route data packets through a network based on logical addresses, such as IP (Internet Protocol) addresses. It accomplishes this by encapsulating the data received from the Transport Layer into packets, adding the source and destination IP addresses, and determining the most efficient path for the packet to reach its destination through the use of routing protocols. Some key features and services of the Network Layer include:

Logical addressing: provides logical addresses, such as IP addresses, to uniquely identify devices on a network.

Routing: determines the optimal path for data packets to travel through a network, using routing protocols such as OSPF (Open Shortest Path First) or BGP (Border Gateway Protocol).

Fragmentation and reassembly: may fragment large data packets into smaller packets for transmission across networks with smaller maximum transmission units (MTUs), and reassemble them at the destination.

Quality of Service (QoS): can prioritize certain types of traffic, such as real-time voice or video, over other traffic to ensure reliable and efficient delivery.

Overall, the Network Layer is responsible for ensuring end-to-end connectivity and reliable transmission of data across different networks.
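As an illustration of the shortest-path computation that link-state routing protocols such as OSPF rely on, here is a small Dijkstra sketch over an invented router graph whose edge weights stand in for link costs.

```python
import heapq

def shortest_path(graph, source, dest):
    """Dijkstra's algorithm; graph is {node: {neighbor: cost, ...}, ...}."""
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dest:
            break
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Reconstruct the route from source to destination.
    path, node = [dest], dest
    while node != source:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dest]

routers = {
    "R1": {"R2": 1, "R3": 4},
    "R2": {"R1": 1, "R3": 2, "R4": 5},
    "R3": {"R1": 4, "R2": 2, "R4": 1},
    "R4": {"R2": 5, "R3": 1},
}
print(shortest_path(routers, "R1", "R4"))  # (['R1', 'R2', 'R3', 'R4'], 4)
```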
Parallel and distributed systems are computer systems that have multiple processors or computers working together to solve a problem. They are designed to handle large and complex tasks by breaking them down into smaller tasks that can be distributed among the processors or computers. Parallel systems consist of multiple processors working together in a shared memory architecture. Each processor has access to the same shared memory and can communicate with each other through it. Parallel systems can be further classified as shared memory systems and distributed memory systems. Distributed systems, on the other hand, consist of multiple computers connected through a network. Each computer has its own memory and processor, and communication between the computers is achieved through the network. Distributed systems can be further classified as client-server systems and peer-to-peer systems. The main advantage of parallel and distributed systems is their ability to perform tasks faster and more efficiently than a single processor or computer can. They can also handle tasks that would be too large or complex for a single processor or computer to handle. Examples of applications that use parallel and distributed systems include weather forecasting, scientific simulations, and data mining. However, designing and programming parallel and distributed systems can be challenging due to the need to coordinate and synchronize the activities of multiple processors or computers. Additionally, communication and synchronization overhead can lead to decreased performance if not managed properly. In summary, parallel and distributed systems are computer systems that have multiple processors or computers working together to solve a problem. They offer significant advantages in terms of performance and scalability but require careful design and programming to achieve optimal performance.
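A minimal sketch of this divide-and-compute idea using Python's multiprocessing module: a large summation is split into chunks that worker processes handle simultaneously (the chunk size and worker count are arbitrary choices).

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker handles one chunk of the larger problem.
    return sum(chunk)

if __name__ == "__main__":
    numbers = list(range(1_000_000))
    chunks = [numbers[i:i + 250_000] for i in range(0, len(numbers), 250_000)]
    with Pool(processes=4) as pool:
        partials = pool.map(partial_sum, chunks)   # chunks are summed in parallel
    print(sum(partials) == sum(numbers))           # True
```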
Architecture: Parallel and distributed systems have multiple processors or computers working together to solve a problem. These systems are designed to handle large and complex tasks by breaking them down into smaller tasks that can be distributed among the processors or computers. Parallel systems consist of multiple processors working together in a shared memory architecture, while distributed systems consist of multiple computers connected through a network.

Challenges: Designing and programming parallel and distributed systems can be challenging because the activities of multiple processors or computers must be coordinated and synchronized. Communication and synchronization overhead can reduce performance if not managed properly. Other challenges include load balancing, fault tolerance, and scalability.

Principles: The principles of parallel and distributed systems include parallelism, distribution, concurrency, and locality. Parallelism is the ability to divide a task into smaller sub-tasks that can be executed simultaneously on multiple processors or computers. Distribution is the ability to spread those sub-tasks among the processors or computers in the system. Concurrency is the ability to execute multiple sub-tasks at the same time. Locality is the ability to minimize communication and synchronization overhead by ensuring that each processor or computer has access to the data it needs.

Paradigms: The main paradigms of parallel and distributed programming include shared memory, message passing, and data parallelism. Shared memory systems use a single memory space that all processors can access, while message-passing systems communicate by exchanging messages between processors or computers (see the sketch below). Data parallelism divides a large data set into smaller pieces and performs the same operation on each piece simultaneously on different processors or computers.

In summary, parallel and distributed systems offer significant advantages in performance and scalability, but designing and programming them is challenging. The principles and paradigms above — parallelism, distribution, concurrency, locality, shared memory, message passing, and data parallelism — are essential to designing and programming these systems effectively.
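Here is a small sketch of the message-passing paradigm mentioned above: worker processes share no memory and exchange data only through queue messages. The task values and worker count are invented for illustration.

```python
from multiprocessing import Process, Queue

def worker(tasks, results):
    while True:
        item = tasks.get()
        if item is None:          # sentinel message: no more work
            break
        results.put(item * item)  # send the result back as a message

if __name__ == "__main__":
    tasks, results = Queue(), Queue()
    procs = [Process(target=worker, args=(tasks, results)) for _ in range(2)]
    for p in procs:
        p.start()
    for n in range(10):
        tasks.put(n)              # distribute work as messages
    for _ in procs:
        tasks.put(None)           # one stop message per worker
    for p in procs:
        p.join()
    print(sorted(results.get() for _ in range(10)))
```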
Security threats and attacks are malicious activities carried out by individuals or groups with the intent of compromising the confidentiality, integrity, or availability of computer systems or networks. Some common types of security threats and attacks include:

Malware: any software designed to harm computer systems or networks, such as viruses, Trojans, and ransomware.

Phishing: a social engineering attack in which an attacker tricks a victim into revealing sensitive information such as passwords, credit card numbers, or personal information.

Denial of Service (DoS) attacks: flooding a network or website with traffic so that it becomes overwhelmed and unavailable to users.

Insider threats: employees or other trusted individuals who use their access to a company's systems or information for malicious purposes.

Advanced Persistent Threats (APTs): complex attacks in which an attacker gains access to a network and remains undetected for an extended period of time.

Man-in-the-middle attacks: an attacker intercepts communication between two parties and can eavesdrop on, manipulate, or modify it.

Password attacks: an attacker attempts to gain unauthorized access to a system by guessing or cracking a user's password.

It is important to protect against security threats and attacks by implementing security measures such as firewalls, antivirus software, and intrusion detection systems, as well as regularly updating software and educating users on safe computing practices.
Malware, short for malicious software, refers to any software designed to harm computer systems or networks. Here are some of the different types of malware and their protections:

Viruses: designed to replicate themselves and spread to other computers; they can cause damage by corrupting files or deleting data. Protection: install and regularly update antivirus software, avoid opening suspicious email attachments, and be cautious when downloading files from the internet.

Trojans: malware that disguises itself as legitimate software; once installed, it can give an attacker remote access to a system or steal sensitive information. Protection: only download software from trusted sources, avoid clicking on suspicious links or pop-up ads, and regularly update software.

Ransomware: encrypts a victim's files and demands payment in exchange for the decryption key. Protection: regularly back up important data, use antivirus software, and avoid clicking on suspicious links or opening suspicious email attachments.

Adware: displays unwanted ads on a victim's computer. Protection: use ad-blocking software and avoid downloading software from untrusted sources.

Spyware: designed to collect personal information from a victim's computer. Protection: use antivirus software, regularly update software, and avoid downloading software from untrusted sources.

Rootkits: allow an attacker to gain root access to a victim's system. Protection: use antivirus software and regularly update software.

In addition to these measures, it is important to practice safe computing habits, such as using strong passwords, avoiding public Wi-Fi networks, and being cautious when clicking on links or downloading files from the internet.
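One simple, commonly used protective technique related to the advice above is file-integrity checking: record a cryptographic hash of important files and re-check it later to detect tampering. The sketch below uses Python's standard hashlib; the monitored file paths are placeholders.

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Hash a file in blocks so even large files are handled cheaply."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(8192), b""):
            h.update(block)
    return h.hexdigest()

# Take a baseline of known-good hashes (e.g. right after installation).
baseline = {str(p): sha256_of(p) for p in Path(".").glob("*.py")}  # placeholder file set

# Later, re-hash the same files and compare against the baseline.
for path, good_hash in baseline.items():
    if sha256_of(Path(path)) != good_hash:
        print(f"WARNING: {path} has changed since the baseline was taken")
```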
Cryptography is the practice of securing communication in the presence of adversaries. Cryptography is achieved by transforming plaintext, or the original message, into ciphertext, which is a scrambled version of the plaintext. This process is known as encryption. The recipient of the message can then use a decryption algorithm to transform the ciphertext back into plaintext. Cryptographic algorithms are mathematical functions that are used to perform encryption and decryption. There are two main types of cryptographic algorithms: symmetric and asymmetric. Symmetric algorithms use the same key for both encryption and decryption. This means that the sender and receiver both have the same key, which must be kept secret from attackers. Examples of symmetric algorithms include AES (Advanced Encryption Standard) and DES (Data Encryption Standard). Asymmetric algorithms, also known as public-key algorithms, use two keys: a public key and a private key. The public key is used for encryption, while the private key is used for decryption. This allows for secure communication between two parties without the need for a shared secret key. Examples of asymmetric algorithms include RSA and Diffie-Hellman key exchange. In addition to encryption and decryption, cryptographic algorithms are also used for other purposes such as digital signatures and authentication. Digital signatures allow for verification of the authenticity of a message, while authentication ensures that the sender and receiver are who they claim to be. Overall, cryptography is an important tool for securing communication and protecting sensitive information. It is used in a variety of applications, including online banking, e-commerce, and secure communication between individuals and organizations.
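A short sketch of symmetric encryption and decryption, assuming the third-party `cryptography` package is installed (`pip install cryptography`); its Fernet construction is built on AES and, as described above, the same secret key must be shared between sender and receiver.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the shared secret key
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"transfer $100 to account 42")   # plaintext -> ciphertext
print(ciphertext)

plaintext = cipher.decrypt(ciphertext)                        # ciphertext -> plaintext
print(plaintext)                     # b'transfer $100 to account 42'
```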
Cryptographic protocols are sets of rules and procedures that govern the secure exchange of information between two or more parties. These protocols use cryptographic algorithms to ensure the confidentiality, integrity, and authenticity of data. Examples of cryptographic protocols include SSL/TLS for secure web browsing and SSH for secure remote access to computer systems. Key distribution is the process of securely distributing cryptographic keys to authorized parties. This is typically done using a key distribution center (KDC) or a public key infrastructure (PKI). The KDC is a centralized server that generates and distributes symmetric keys, while the PKI uses asymmetric cryptography to distribute public keys and verify the identity of parties. Naming services are used to map human-readable names to network addresses. One common naming service is the Domain Name System (DNS), which is used to translate domain names into IP addresses. DNS works by maintaining a hierarchical system of domain names and servers, allowing for efficient resolution of name queries. Attribute-based naming is a type of naming system that allows for the use of attributes, rather than names, to identify resources. This can be useful in situations where resources are highly dynamic or difficult to name conventionally. Attribute-based naming systems often use a distributed naming service to map attributes to resource identifiers. Examples of attribute-based naming systems include the Resource Description Framework (RDF) and the Extensible Markup Language (XML).
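To illustrate how two parties can agree on a key without ever transmitting it, here is a toy Diffie-Hellman exchange in plain Python; the prime, generator, and private values are tiny, made-up numbers and far too small to be secure.

```python
p, g = 23, 5                  # public prime modulus and generator

alice_secret = 6              # private values, never sent over the network
bob_secret = 15

alice_public = pow(g, alice_secret, p)   # sent to Bob in the clear
bob_public = pow(g, bob_secret, p)       # sent to Alice in the clear

# Each side combines its own secret with the other's public value.
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)

print(alice_shared, bob_shared)          # both compute the same shared secret: 2 2
```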
Distributed File Systems:

From a client perspective, a distributed file system allows the client to access files on a remote server as if they were stored locally. The client interacts with the distributed file system through a set of system calls similar to those used for accessing local files. The distributed file system is responsible for managing the location and replication of files across multiple servers.

From a server perspective, a distributed file system allows multiple servers to collaborate to provide a unified file system to clients. The servers work together to manage the storage and access of files, ensuring that files are replicated for fault tolerance and load balancing.

NFS (Network File System) is a widely used distributed file system that allows clients to access files on remote servers using a set of standard system calls. NFS is designed to be simple and efficient, making it popular in UNIX and Linux environments.

Coda is a distributed file system designed to provide high availability and reliability, even in the face of network failures or server crashes. Coda supports disconnected operation, which allows clients to continue accessing files even if they are temporarily disconnected from the network.

Google File System (GFS) is a distributed file system developed by Google to handle the massive amounts of data generated by its search engine and other services. GFS is designed for high throughput and reliability, with a focus on scalability and fault tolerance.

Parallel Programming:

Parallel computing is the use of multiple processors or cores to perform computations at the same time, with the goal of improving performance and efficiency. Parallel programming involves designing algorithms and writing code that can take advantage of parallel architectures. The structure of a parallel program involves breaking a problem into smaller, independent tasks that can be executed in parallel; these tasks are then assigned to multiple processors or cores and executed concurrently. Parallel programming typically uses parallel constructs, such as parallel loops or parallel sections, that let programmers specify which parts of the program should run in parallel. Parallel programming can be challenging, as it requires careful design and management of shared resources such as memory and communication channels, but the potential benefits, such as improved performance and scalability, make it a valuable tool for a wide range of applications.
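A short sketch of the parallel-programming structure just described, using Python's concurrent.futures: the problem is split into independent tasks (here, a stand-in primality check) that a pool of worker processes executes concurrently.

```python
from concurrent.futures import ProcessPoolExecutor

def is_prime(n):
    # A deliberately simple, CPU-bound task standing in for real work.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

if __name__ == "__main__":
    candidates = range(10_000, 10_100)
    with ProcessPoolExecutor() as pool:               # one worker per CPU core by default
        flags = list(pool.map(is_prime, candidates))  # independent tasks run in parallel
    primes = [n for n, f in zip(candidates, flags) if f]
    print(primes[:5])
```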