1. Parallel computing breaks a large task into multiple subtasks that are performed at the same time, or at least as many at once as the hardware allows. This saves time compared to sequential computing, where tasks are performed in order and each task must wait for the previous one to finish before it can start.
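As a minimal sketch of the idea, the same set of subtasks can be run one after another or handed to a thread pool so they run at the same time (the `subtask` function and inputs here are hypothetical, just to illustrate):

```python
import concurrent.futures

# Hypothetical subtask: in a real program these would be
# independent pieces of the larger task.
def subtask(n):
    return n * n

numbers = [1, 2, 3, 4]

# Sequential: each subtask waits for the previous one to finish.
sequential_results = [subtask(n) for n in numbers]

# Parallel: a thread pool runs the subtasks at the same time.
with concurrent.futures.ThreadPoolExecutor() as pool:
    parallel_results = list(pool.map(subtask, numbers))

print(sequential_results)  # [1, 4, 9, 16]
print(parallel_results)    # [1, 4, 9, 16] - same answer, less waiting
```

Both approaches produce the same results; the parallel version just overlaps the work instead of doing it in order.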

  2. The program will take 100 ms to run, the length of the longest subtask. Assuming all three subtasks start at the same time, the program finishes only when all of them have finished, which is determined by the slowest one.
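This can be sketched numerically. Only the 100 ms figure comes from the question; the other two durations below are hypothetical stand-ins:

```python
# Hypothetical subtask durations in ms; 100 ms is the longest.
durations_ms = [40, 75, 100]

# Parallel: all subtasks start together, so the program finishes
# when the slowest subtask does.
parallel_time = max(durations_ms)

# Sequential: the subtasks would instead run one after another.
sequential_time = sum(durations_ms)

print(parallel_time)    # 100
print(sequential_time)  # 215
```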

  3. Parallel computing is more efficient as tasks are performed simultaneously, saving time compared to sequential computing, which requires waiting for each task to finish before starting the next task.
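The efficiency gain is often expressed as speedup, the sequential time divided by the parallel time. A sketch with hypothetical durations:

```python
# Speedup = sequential time / parallel time.
# These durations are hypothetical examples.
durations_ms = [40, 75, 100]
speedup = sum(durations_ms) / max(durations_ms)
print(round(speedup, 2))  # 2.15
```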

  4. Multiple routes/connections leading to the same destination are known as redundancy. When used excessively, redundancy has its drawbacks, but in moderation it improves fault tolerance and prevents a single link going down from heavily impacting connectivity in the network.
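A small sketch of why redundancy helps: in the hypothetical four-device network below there are two routes between A and D, so the network stays connected even after one link fails (the devices and links are made up for illustration):

```python
from collections import deque

def is_connected(nodes, edges):
    """BFS check that every device is reachable from the first one."""
    graph = {n: set() for n in nodes}
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen = {nodes[0]}
    queue = deque([nodes[0]])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node] - seen:
            seen.add(neighbor)
            queue.append(neighbor)
    return seen == set(nodes)

# Hypothetical network with a redundant route between A and D.
nodes = ["A", "B", "C", "D"]
edges = [("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")]

print(is_connected(nodes, edges))  # True: all links up

# Take the B-D link down; the redundant A-C-D route remains.
failed = [e for e in edges if e != ("B", "D")]
print(is_connected(nodes, failed))  # True: still connected
```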

  5. The above network is fault tolerant, as it has multiple connections between each pair of devices.