Q1 (8 pts)
4 marks: Explain the difference between data and task parallelism with examples.
4 marks: Discuss the parallel processing architectures that best suit these two types of parallelism, respectively.

Q2 (4 pts)
2 marks: In a multicore system with multiple hardware threads, is it useful if the OS is aware of the hardware threads?
2 marks: Explain how this helps improve system performance.

Q3 (6 pts). Multiprocessors may use a shared queue or private queues (one for each of the processors).
3 marks: Discuss the advantages and disadvantages of using a shared ready queue.
3 marks: Discuss the advantages and disadvantages of using private queues.

Q4 (6 pts)
3 marks: Briefly explain how sum reduction works.
3 marks: In addition to the difference in communication methods (shared memory vs message passing), what is the key difference between shared memory and message passing multiprocessors performing sum reduction?

Q5 (6 pts). CPU, GPU and DPU are commonly used in the cloud for a variety of tasks.
COSC2626 only: Discuss the kind of tasks that best suit each of the processors. 2 marks each.
COSC2640 only: Discuss what parallel processing architectures these processors use and why. 2 marks each.

Q6 (6 pts). Briefly describe how the Internet is structured, listing the key devices. 2 marks each.

Q7 (6 pts). Discuss what makes a Tier-1 ISP and how ISP and CSP networks are connected. What is the financial implication?
3 marks for Tier-1 ISP.
3 marks for how ISP/CSP are connected, including financial implications.

Q8 (10 pts). Explain how Internet protocols are organised, and the benefits and weaknesses of that organisation. According to the organisation, what are the protocols that need to be processed by intermediate nodes such as routers and switches (ignore security concerns or special purposes)?
4 marks for the layered approach, including the layers.
2 marks for benefits.
2 marks for weaknesses.
2 marks for the protocols that need to be processed by intermediate nodes such as routers and switches.

Q9 (8 pts). From a network topology point of view, discuss and explain the similarity and differences between institutional networks and data centre networks.
2 marks for similarity.
6 marks for differences.

Q10 (6 pts). A data centre may provide many different applications and services at the same time, such as search engines, web hosting, email, video streaming, etc.
COSC2626 only: As a result, it has to handle a tremendous number of requests for these different applications. Describe how the requests are distributed and handled. 2 marks for each key point.
COSC2640 only: Discuss whether it is a good idea to expose the servers to the clients, that is, allowing clients to contact these servers directly. Explain your choice. 2 marks for each key point.

Q11 (6 pts). You are tasked to design the tools for live VM migration within a data centre. Discuss and compare the data transfer protocols that you may use. 2 marks for each data transfer protocol.

Q12 (6 pts). In video streaming over HTTP, such as YouTube, there is a large variation in the amount of bandwidth available to a client, across different clients or over time for the same client. Is this an issue? Explain how a streaming protocol would handle this issue. 1 mark for each of the six key points.

Q13 (6 pts). Suppose you are visiting a news website. The news agency has its video content hosted in a third-party data centre, say, AWS. While reading a piece of news, you clicked a video link. Describe the procedure of how the video clip was retrieved. Marks will be granted for the correct steps.

Q14 (6 pts)
COSC2626 only: When a client sends a request to a CDN, the CDN needs to determine the server that is going to provide the requested content. Discuss how the server may be determined as well as the corresponding issues. 2 marks for each key point.
COSC2640 only: A CDN hosts both static and dynamic content. Some are of local significance while some are of global significance. Discuss where the different types of content are stored as well as the server capacity implications. 2 marks for each key point.

Total points: 90
Dr Raghunandan G answered on Jun 01 2022
1.
Data parallelism refers to distributing data across multiple processors in a parallel computing environment. The data set is partitioned among the nodes so that they can all work on it at once, with every node applying the same operation to its own portion. Regular data structures such as arrays and matrices lend themselves to this approach because the operation can be applied in parallel to each of their elements (O. S. Simsek 2018). For example, suppose we want to add up all n elements of an array and each addition takes Ta time units. Executed sequentially, the sum takes n·Ta time units. Running the same operation data-parallel on four processors reduces the time to roughly (n/4)·Ta plus the cost of merging the four partial results: each processor works on a different subset of the same input, giving close to a four-fold speedup over sequential processing. Its characteristics are listed below, followed by a short code sketch.
· The computation on each partition is performed synchronously: the same operation, applied in lockstep to different data.
· Speedup is larger because a single stream of instructions operates on all sets of data.
· The amount of parallelism is proportional to the size of the input data.
· It is designed to achieve optimum load balance on a multiprocessor system.
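A minimal data-parallelism sketch (an assumed example, not from the course material), using Python's multiprocessing.Pool and a made-up square_chunk function: the same operation is applied to different partitions of one array, one partition per worker, mirroring the n·Ta versus (n/4)·Ta argument above.

```python
from multiprocessing import Pool

def square_chunk(chunk):
    # Every worker runs this identical operation on its own slice of data.
    return [x * x for x in chunk]

if __name__ == "__main__":
    data = list(range(16))
    n_workers = 4
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]

    # The same function runs in parallel on each partition (data parallelism).
    with Pool(n_workers) as pool:
        partial_results = pool.map(square_chunk, chunks)

    # Merge step: stitch the partial results back into one array.
    result = [y for part in partial_results for y in part]
    print(result)
```

The merge step at the end corresponds to the "plus merging" term in the (n/4)·Ta estimate.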
Task parallelism, also known as function parallelism or control parallelism, is a way to spread computation across several processors in a parallel computing system: the tasks to be performed concurrently are split across processors as processes or threads. It differs from data parallelism, which applies the same operation to different parts of the data; task parallelism instead performs many different tasks, on the same or different data, at the same time. Pipelining is a common form of task parallelism, in which the same stream of data passes through a series of distinct stages that can run independently of one another. Its characteristics are listed below, with a small pipeline sketch after the list.
· Different tasks are carried out on the same or different data.
· The computation is performed asynchronously.
· Speedup is smaller, since each processor runs a distinct thread or process on the same or a different piece of data.
· The degree of parallelism is proportional to the number of independent tasks.
· Task scheduling depends on processor availability and on the scheduling strategy used, such as static or dynamic scheduling.
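For contrast, a minimal task-parallelism sketch (again an assumed example): two different stages, a hypothetical parse_stage and aggregate_stage, run concurrently as a two-step pipeline over the same stream of data.

```python
import threading
import queue

raw_lines = ["3 4", "10 20", "7 8"]
parsed_q = queue.Queue()
DONE = object()  # sentinel marking the end of the stream

def parse_stage():
    # Task 1: turn each text line into a pair of integers.
    for line in raw_lines:
        a, b = line.split()
        parsed_q.put((int(a), int(b)))
    parsed_q.put(DONE)

def aggregate_stage():
    # Task 2: consume parsed pairs and keep a running total of their sums.
    total = 0
    while True:
        item = parsed_q.get()
        if item is DONE:
            break
        total += item[0] + item[1]
    print("pipeline total:", total)

# Both stages run at the same time on the same data stream (task parallelism).
t1 = threading.Thread(target=parse_stage)
t2 = threading.Thread(target=aggregate_stage)
t1.start()
t2.start()
t1.join()
t2.join()
```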
Architectures that suit the two types of parallelism: data parallelism is best served by SIMD-style architectures such as vector processors and general-purpose computing on graphics processing units (GPGPU), where a single instruction stream operates on many data elements at once; the idealised PRAM model provides a useful framework for designing such parallel algorithms without taking physical restrictions or implementation concerns (such as VLSI layout) into account. Task parallelism is best served by MIMD-style architectures in which each processing unit can run its own instruction stream, such as multicore and multiprocessor machines, cloud computing and data centre platforms, special-purpose accelerator boards, and reprogrammable data processing with FPGAs.
2.
Yes, it is useful for the OS to be aware of the hardware threads. Each core may expose several hardware threads (logical CPUs) that share that core's execution units and caches, so a scheduler that knows which logical CPUs are siblings can spread runnable threads across physical cores first and only then double up on sibling threads, avoiding needless competition for the same core's resources while still keeping a core busy when one of its threads stalls on memory.
A multicore processor is a single integrated circuit, also called a chip multiprocessor (CMP), with several processing units, or cores. Each core is an execution engine: the program instructions stored in system memory tell the core how to process data. Multicore processing improves speed by letting multiple applications run at the same time, and because the cores sit on one chip rather than in separate processors or computers, they can reach shared resources and cached data faster. The main benefits are listed below, with a small CPU-affinity sketch after the list.
1. Energy efficiency. By adopting multicore processors, engineers can reduce the number of separate chips in a system. This counteracts the increasing heat generation that comes with continued Moore's-Law scaling (smaller circuits have higher electrical resistance, producing more heat), reducing the demand for cooling. Multicore processing also lowers power consumption, since less energy is lost as heat, which extends battery life.
2. True concurrency. By assigning applications to different cores, multicore processing provides true (as opposed to time-sliced) parallelism, both within an individual program and across applications.
3. Performance. Multicore processing boosts speed by allowing numerous applications to run at the same time. Compared with separate processors or computers, the short distances between cores on one integrated circuit give lower latency to shared resources and caches. The size of the speedup, however, depends on the number of cores, the degree of genuine parallelism in the application, and the amount of resource sharing.
4. Isolation. Compared with single-core designs, multicore processors can improve (but do not guarantee) spatial and temporal isolation between workloads, since applications running on different cores interfere with each other less.
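As an illustration of OS awareness of hardware threads, here is a Linux-only sketch using Python's os.sched_getaffinity and os.sched_setaffinity. The topology assumed in the comments (logical CPUs 0 and 1 being SMT siblings of one physical core) is hypothetical and will differ between machines.

```python
import os

# Linux-only sketch: inspect and set which logical CPUs (hardware threads)
# the current process may run on. A scheduler that knows the topology can
# keep two busy processes on different physical cores instead of on two
# sibling hardware threads of the same core.

pid = 0  # 0 means "the calling process"

print("allowed logical CPUs:", sorted(os.sched_getaffinity(pid)))

# Hypothetical topology: logical CPUs 0 and 1 share one physical core,
# while CPU 2 lives on another core. Pinning this process to CPU 0 and a
# second busy process to CPU 2 avoids having them compete for the same
# core's execution units and L1/L2 caches.
try:
    os.sched_setaffinity(pid, {0})
    print("now pinned to:", sorted(os.sched_getaffinity(pid)))
except (OSError, ValueError) as exc:
    print("could not set affinity:", exc)
```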
3.
The ready queue holds the processes that are ready to execute. It contains the process control blocks (PCBs) of processes that reside in main memory and are waiting for a CPU, and it is usually implemented as a linked list: the ready-queue header points to the first and last PCBs, and each PCB has a pointer field linking it to the next PCB in the queue.
With a single shared ready queue, every processor dispatches its next process from the same queue.
Advantages
· Simple to implement, and every ready process is visible to every CPU.
· Automatic load balancing: no processor sits idle while another has a backlog, so CPU utilisation is good and processes are treated fairly.
Disadvantages
· The queue must be protected by a lock, so contention on that lock limits scalability as the number of processors grows.
· Poor cache affinity: a process may run on a different CPU each time it is dispatched, losing the warm cache contents of the previous CPU.
With private queues, each processor has its own ready queue and schedules only from it.
Advantages
· No contention for a shared structure, so the design scales well with the number of processors and scheduling decisions are quick to make.
· Better cache affinity: a process tends to keep running on the CPU whose caches already hold its data.
Disadvantages
· Load imbalance: some CPUs may accumulate long queues while others sit idle, so overall CPU utilisation can be poorer and processes on the busy CPUs are treated unfairly.
· Extra machinery such as periodic load balancing or work stealing between queues is needed to correct the imbalance, which adds complexity. A small scheduling sketch contrasting the two designs follows.
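This is a toy sketch (not a real OS scheduler) using Python threads as stand-in "processors": the shared version pulls jobs from one lock-protected queue, while the private version gives each worker its own statically filled deque. The job names and the 3-CPU/9-job split are arbitrary choices for illustration.

```python
import queue
import threading
from collections import deque

JOBS = [f"job-{i}" for i in range(9)]
N_CPUS = 3

# --- Shared ready queue: one lock-protected queue feeds every CPU. ---
# Load balances automatically, but every dispatch contends for the same
# internal lock, and a job may run on a different CPU each time.
shared = queue.Queue()
for j in JOBS:
    shared.put(j)

def cpu_shared(cpu_id):
    while True:
        try:
            job = shared.get_nowait()
        except queue.Empty:
            return
        print(f"CPU{cpu_id} (shared queue) runs {job}")

threads = [threading.Thread(target=cpu_shared, args=(i,)) for i in range(N_CPUS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# --- Private queues: each CPU dispatches only from its own deque. ---
# No contention and good cache affinity, but if the queues are uneven one
# CPU finishes early and idles unless work is rebalanced or stolen.
private = [deque() for _ in range(N_CPUS)]
for i, j in enumerate(JOBS):
    private[i % N_CPUS].append(j)   # static round-robin assignment

def cpu_private(cpu_id):
    q = private[cpu_id]
    while q:
        print(f"CPU{cpu_id} (private queue) runs {q.popleft()}")

threads = [threading.Thread(target=cpu_private, args=(i,)) for i in range(N_CPUS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```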
4.
To begin, a reduction is an operation built around a combining function that takes two arguments: an accumulator and the current element. On each iteration the current element is combined into the accumulator, and the result is carried into the next iteration as the new accumulator value; with addition as the combining function over an array, the final accumulator holds the sum of all elements. In a parallel sum reduction, each processor first adds up its own portion of the data, and the partial sums are then combined, typically pairwise in a tree, so that p processors need about log2(p) combining steps. A small sketch follows.
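A small sketch of sum reduction, assuming Python and a 4-processor partitioning chosen for illustration: a sequential reduce first, then partial sums combined pairwise in a tree.

```python
from functools import reduce

# Sequential reduction: the accumulator carries the running result and is
# combined with the current element on every step.
data = list(range(1, 17))                      # 1..16, true sum = 136
total = reduce(lambda acc, x: acc + x, data, 0)
print("sequential reduce:", total)

# Simulated parallel sum reduction with 4 "processors": each one sums its
# own slice, then the partial sums are combined pairwise, halving the
# number of active values on every step (log2(p) combining steps).
P = 4
chunk = len(data) // P
partials = [sum(data[i * chunk:(i + 1) * chunk]) for i in range(P)]
print("partial sums:", partials)

step = 1
while step < P:
    for i in range(0, P, 2 * step):
        # Processor i combines its partner's partial sum into its own.
        partials[i] += partials[i + step]
    step *= 2
print("tree-reduced total:", partials[0])
```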
1. In the shared memory model, cooperating processes communicate by reading and writing a region of memory that all of them can access, whereas in the message passing model communication takes place through messages exchanged between the cooperating processes.
2. With shared memory, many processes can access the shared data at the same time; with message passing, data moves only when one process explicitly sends a message and another receives it.
3. A message passing facility provides two operations, send(message) and receive(message), and messages may be of fixed or variable size.
4. Message passing is useful for exchanging smaller amounts of data, because no conflicts need to be avoided, and it is easier to implement than shared memory as a mechanism for inter-process communication.
5. In shared memory systems, system calls are required only to establish the shared memory region; once it is set up, all accesses are treated as ordinary memory accesses and no kernel assistance is needed.
6. For sum reduction specifically, the key difference beyond the communication method is where the partial sums live: in a shared memory multiprocessor all processors read and update partial results held in one shared array and must synchronise so that a value is not read before it has been written, whereas in a message passing multiprocessor each processor keeps its own private partial sum and the combining is driven by explicit send/receive pairs, with half of the remaining processors sending and the other half receiving and adding at each step.
Shared memory is just what it sounds like: a region of memory that more than one process can read and write. It has no built-in synchronisation, so the programmer must make sure that one process does not overwrite another's data. In terms of bandwidth it performs well, because reads and writes are ordinary, fast memory operations.
A message queue is a one-way pipe: one process writes messages to it, and another reads them in the order they were sent until there are no more messages to read.
The message size (bytes per message, usually small) and the queue length (maximum number of outstanding messages) are fixed when the queue is created. Access is slower than shared memory because each read or write transfers a whole message, but the queue guarantees that each operation either processes a complete message or fails without affecting the queue; a writer can never deliver only part of a message, so the reader receives either the entire message or nothing at all.
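Here is a sketch of the same 4-worker sum done both ways with Python's multiprocessing module (the chunking and worker count are arbitrary choices): the shared-memory version adds into one shared accumulator under a lock, while the message-passing version keeps private partial sums and sends them through a queue.

```python
import multiprocessing as mp

DATA = list(range(1, 101))          # true sum = 5050
P = 4
CHUNKS = [DATA[i::P] for i in range(P)]

# --- Shared memory: workers add into one shared accumulator. ---
# There is a single copy of the result; only synchronisation is needed.
def add_shared(chunk, acc, lock):
    s = sum(chunk)
    with lock:                      # avoid a lost-update race on acc
        acc.value += s

# --- Message passing: each worker keeps a private partial sum and
# explicitly sends it; the parent receives and combines the messages. ---
def add_message(chunk, q):
    q.put(sum(chunk))               # send(partial sum)

if __name__ == "__main__":
    lock = mp.Lock()
    acc = mp.Value("q", 0)          # shared 64-bit integer accumulator
    procs = [mp.Process(target=add_shared, args=(c, acc, lock)) for c in CHUNKS]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print("shared-memory sum:", acc.value)

    q = mp.Queue()
    procs = [mp.Process(target=add_message, args=(c, q)) for c in CHUNKS]
    for p in procs:
        p.start()
    total = sum(q.get() for _ in range(P))   # receive() P partial sums
    for p in procs:
        p.join()
    print("message-passing sum:", total)
```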
5.
A central processing unit (CPU) is the circuitry responsible for carrying out the instructions that make up a computer program. For many years the CPU, famed for its flexibility and responsiveness, was the only programmable element in most systems; it best suits general-purpose, latency-sensitive and largely sequential tasks such as running operating systems, databases, web servers and application logic.
A graphics processing unit (GPU) is a specialised electronic circuit designed to manipulate and update memory rapidly in order to accelerate the creation of images in a frame buffer intended for output to a display.
Their highly parallel design was first used to produce rich, real-time graphics, but it now suits many high-throughput computing workloads: GPUs are essential for applications involving artificial intelligence, machine learning and big-data analytics, where the same operation is applied to very large volumes of data.
DPUs are a newer class of programmable processor that combine CPU cores with flexible, customisable accelerator engines; in the cloud they are typically used to offload data-centric infrastructure tasks such as networking, storage and security processing from the host CPU.