
The Gaussian elimination algorithm is useful for computing the solutions of a linear system of equations. However, it is impractical for a large number of variables, because the number of operations scales cubically with the size of the system.


In this project, you are going to demonstrate and compare numerical algorithms using Python. Here are some examples:


1. Jacobi Method


2. Gauss-Seidel Method


3. Successive over-relaxation method






In addition to Gaussian elimination, explain each of the three methods you have chosen. Choose or randomly generate a large system of linear equations. It should be large enough to differentiate the speed of the algorithms, but do not choose something your device cannot handle. Then compare the advantages and disadvantages of each method, such as preconditions and implementation time. The Gaussian elimination method may fail, but make sure that the other methods give you similar numerical solutions.


You need to submit your project as a Jupyter notebook.


Neha answered on Oct 02 2021
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "Untitled2.ipynb",
"provenance": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "6ErEbO85yb6T"
},
"source": [
"In numerical linear algebra, the Jacobi method is an iterative algorithm for determining the solutions of a strictly diagonally dominant system of linear equations. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The method is named after Carl Gustav Jacob Jacobi."
]
},
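{
"cell_type": "markdown",
"metadata": {},
"source": [
"With the splitting $A = D + (L + U)$, where $D$ is the diagonal part of $A$, the Jacobi update implemented in the next cell can be written as\n",
"\n",
"$$x^{(k+1)} = D^{-1}\\left(b - (L + U)\\,x^{(k)}\\right),$$\n",
"\n",
"and the iteration stops once $\\lVert x^{(k+1)} - x^{(k)} \\rVert$ drops below the tolerance EPSILON or the iteration limit is reached."
]
},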
{
"cell_type": "code",
"metadata": {
"id": "IZK9WFoRxu_y"
},
"source": [
"def jac_algo_run(Matrix, RowVector, x_init):\n",
" D = np.diag(np.diag(Matrix))\n",
" LU = Matrix - D\n",
" x = x_init\n",

" count = 0\n",
" for i in range(ITERATION_LIMIT):\n",
" D_inv = np.diag(1 / np.diag(D))\n",
" dot_res = np.dot(LU, x)\n",
" nextiter = np.dot(D_inv, RowVector - dot_res )\n",
" if np.linalg.norm(nextiter - x) < EPSILON:\n",
" return nextiter , count\n",
" x = nextiter\n",
" count = count + 1\n",
" return x , count\n",
"\n",
"def jac_algo(Matrix, RowVector):\n",
" x_init = np.zeros(len(RowVector))\n",
" x , count = jac_algo_run(Matrix, RowVector, x_init)\n",
" return x , count\n"
],
"execution_count": 2,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "DykrR9tqyPMo"
},
"source": [
"Gaussian elimination, also known as row reduction, is an algorithm in linear algebra for solving a system of linear equations. It is usually understood as a sequence of operations performed on the corresponding matrix of coefficients. This method can also be used to find the rank of a matrix, to calculate the determinant of a matrix, and to calculate the inverse of an invertible square matrix.\n",
"\n",
"To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations:\n",
"\n",
"●\tSwapping two rows,\n",
"\n",
"●\tMultiplying a row by a nonzero number,\n",
"\n",
"●\tAdding a multiple of one row to another row.\n"
]
},
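{
"cell_type": "markdown",
"metadata": {},
"source": [
"The notebook does not otherwise contain a direct Gaussian elimination solver, so the next cell adds a minimal sketch for comparison. It uses partial pivoting (row swaps onto the largest available pivot) to avoid the zero-pivot failure discussed later; the function name gaussian_elimination is illustrative and was not part of the original submission."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"import numpy as np\n",
"\n",
"# Minimal sketch of Gaussian elimination with partial pivoting (illustrative only).\n",
"def gaussian_elimination(A, b):\n",
"    A = np.asarray(A, dtype=float).copy()\n",
"    b = np.asarray(b, dtype=float).copy()\n",
"    n = len(b)\n",
"    # Forward elimination: zero out the entries below each pivot\n",
"    for k in range(n - 1):\n",
"        p = k + np.argmax(np.abs(A[k:, k]))  # row with the largest pivot candidate\n",
"        if p != k:\n",
"            A[[k, p]] = A[[p, k]]\n",
"            b[[k, p]] = b[[p, k]]\n",
"        for i in range(k + 1, n):\n",
"            factor = A[i, k] / A[k, k]\n",
"            A[i, k:] -= factor * A[k, k:]\n",
"            b[i] -= factor * b[k]\n",
"    # Back substitution on the resulting upper-triangular system\n",
"    x = np.zeros(n)\n",
"    for i in range(n - 1, -1, -1):\n",
"        x[i] = (b[i] - np.dot(A[i, i + 1:], x[i + 1:])) / A[i, i]\n",
"    return x\n",
"\n",
"# e.g. gaussian_elimination(Matrix, RowVector) should agree with the iterative solutions\n"
],
"execution_count": null,
"outputs": []
},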
{
"cell_type": "code",
"metadata": {
"id": "EheiqOQWxzeD"
},
"source": [
"def gauss_algo(Matrix, RowVector):\n",
" count = 0\n",
" thisIter = np.ones(len(RowVector))\n",
" for it_count in range(1, ITERATION_LIMIT):\n",
" nextIter = np.zeros_like(thisIter)\n",
" for i in range(Matrix.shape[0]):\n",
" s1 = np.dot(Matrix[i, :i], nextIter[:i])\n",
" s2 = np.dot(Matrix[i, i + 1:], thisIter[i + 1:])\n",
" nextIter[i] = (RowVector[i] - s1 - s2) / Matrix[i, i]\n",
" count = count + 1\n",
" if np.allclose(thisIter, nextIter, rtol=EPSILON):\n",
" break\n",
" thisIter = nextIter\n",
" return thisIter, count"
],
"execution_count": 3,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "UjX7w6cDyowZ"
},
"source": [
"In numerical linear algebra, the Gauss–Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a system of linear equations. It is named after the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel, and is similar to the Jacobi method. Though it can be applied to any matrix with non-zero elements on the diagonals, convergence is only guaranteed if the matrix is either strictly diagonally dominant, or symmetric and positive definite. It was only mentioned in a private letter from Gauss to his student Gerling in 1823. A publication was not delivered before 1874 by Seidel."
]
},
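{
"cell_type": "markdown",
"metadata": {},
"source": [
"In component form, each Gauss-Seidel sweep (implemented above as gauss_algo) computes\n",
"\n",
"$$x_i^{(k+1)} = \\frac{1}{a_{ii}}\\Bigl(b_i - \\sum_{j<i} a_{ij}\\,x_j^{(k+1)} - \\sum_{j>i} a_{ij}\\,x_j^{(k)}\\Bigr),$$\n",
"\n",
"so the components updated during the current sweep are reused immediately, unlike in the Jacobi method."
]
},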
{
"cell_type": "code",
"metadata": {
"id": "nyOCCComx3IC"
},
"source": [
"def relax_algo_run(Matrix, RowVector):\n",
" phiVal = np.zeros(len(RowVector))\n",
" count = 1;\n",
" residual = np.linalg.norm(np.matmul(Matrix, phiVal) - RowVector)\n",
" while residual > EPSILON:\n",
" for i in range(Matrix.shape[0]):\n",
" sigmaGuess = 0\n",
" for j in range(Matrix.shape[1]):\n",
" if i != j:\n",
" sigmaGuess += Matrix[i][j] * phiVal[j]\n",
" diff = (RowVector[i] - sigmaGuess)\n",
" div = (OMEGA / Matrix[i][i])\n",
" one_minus = (1 - OMEGA)\n",
" phiVal[i] = one_minus * phiVal[i] + div * diff\n",
" halfVec = np.matmul(Matrix, phiVal) - RowVector\n",
" residual = np.linalg.norm(halfVec)\n",
" count = count + 1\n",
" return phiVal , count\n",
"\n",
"\n",
"def relax_algo(Matrix, RowVector):\n",
" phi , count = relax_algo_run(Matrix, RowVector)\n",
" return phi , count\n"
],
"execution_count": 4,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "Gaywx_mYyyKU"
},
"source": [
"In numerical linear algebra, the method of successive over-relaxation (SOR) is a variant of the Gauss–Seidel method for solving a linear system of equations, resulting in faster convergence. A similar method can be used for any slowly converging iterative process."
]
},
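{
"cell_type": "markdown",
"metadata": {},
"source": [
"The relax_algo_run cell above implements the component-wise SOR update\n",
"\n",
"$$x_i^{(k+1)} = (1 - \\omega)\\,x_i^{(k)} + \\frac{\\omega}{a_{ii}}\\Bigl(b_i - \\sum_{j<i} a_{ij}\\,x_j^{(k+1)} - \\sum_{j>i} a_{ij}\\,x_j^{(k)}\\Bigr),$$\n",
"\n",
"where $\\omega$ is the constant OMEGA. Taking $\\omega = 1$ recovers Gauss-Seidel, $1 < \\omega < 2$ over-relaxes, and $\\omega < 1$ (as with the value 0.8 used below) under-relaxes."
]
},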
{
"cell_type": "code",
"metadata": {
"id": "_9MWo2AAx6Vb"
},
"source": [
"ef get_time(obj, A, b):\n",
" start_time = time.time()\n",
" x , itr = obj(A, b);\n",
" end_time = time.time()\n",
" print(\"Solution: {0}\".format( x ))\n",
" return (end_time - start_time) * 1000 , itr"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "zG6fQCGNy2Ax"
},
"source": [
"Differences\n",
"1.\tConvergence is fastest in successive over relaxation followed by Gauss-Seidel, who in turn lies in front of Jacobi method.\n",
"2.\tJacobi’s method Considers the updated values only from the second iteration.The first iteration is purely based on the initial guess values provided. But Gauss Seidal Considers the updated values right from the first iteration itself. Due to this, the rate of Convergence of Gauss Seidel is much faster than Jacobi’s technique.\n",
"3.\tSuccessive Overrelaxation (SOR) Method, is a generalization of and improvement on the Gauss-Seidel Method. Idea is: For any iterative method, in finding x(k+1) from x(k), we move a certain amount in a particular direction from x(k) to x(k+1). This direction is the vector x(k+1) − x(k), since x(k+1) = x(k) + (x(k+1) − x(k)). If we assume that the direction from x(k) to x(k+1) is taking us closer, but not all the way, to the true solution x , then it would make sense to move in the same direction x(k+1) − x(k), but farther along that direction.\n",
"4.\tGaussian elimination fails if any of the pivots is zero, it is worse yet if any pivot becomes close to zero. In this case, the method can be carried to completion, but the obtained results may be totally wrong.\n"
]
},
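{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick check of the precondition mentioned above, the following sketch tests strict diagonal dominance, a standard sufficient condition for the Jacobi and Gauss-Seidel iterations to converge. The helper name is_diagonally_dominant is illustrative and was not part of the original submission."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"import numpy as np\n",
"\n",
"def is_diagonally_dominant(A):\n",
"    # Strict diagonal dominance: |a_ii| > sum of |a_ij| over j != i, for every row i\n",
"    A = np.asarray(A, dtype=float)\n",
"    diag = np.abs(np.diag(A))\n",
"    off_diag = np.sum(np.abs(A), axis=1) - diag\n",
"    return bool(np.all(diag > off_diag))\n",
"\n",
"# e.g. is_diagonally_dominant(Matrix) is True for the 4x4 test system defined below\n"
],
"execution_count": null,
"outputs": []
},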
{
"cell_type": "code",
"metadata": {
"id": "en579FS5xokI",
"outputId": "efd3104a-e5a7-4cb5-9110-c167bb7b91f3",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 935
}
},
"source": [
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"import time\n",
"\n",
"ITERATION_LIMIT = 2000\n",
"EPSILON = float(1e-8)\n",
"OMEGA = 0.8\n",
"\n",
"def run_test_case(A, b):\n",
" print(\"System of equations:\")\n",
" for i in range(A.shape[0]):\n",
" row = [\"{0:3g}*x{1}\".format(A[i, j], j + 1) for j in range(A.shape[1])]\n",
" print(\"[{0}] = [{1:3g}]\".format(\" + \".join(row), b[i]))\n",
"\n",
" print(\"Profiling....\")\n",
" t1 , itr1 = get_time(jac_algo, A, b);\n",
" print(\"--- Jacobi took %s milli seconds ---\" % (t1))\n",
" print(itr1)\n",
" t2 , itr2 = get_time(gauss_algo, A, b);\n",
" print(\"--- Gauss–Seidel took %s milli seconds ---\" % (t2))\n",
" print(itr2)\n",
" t3 , itr3 = get_time(relax_algo, A, b);\n",
" print(\"--- Successive over-relaxation took %s milli seconds ---\" % (t3))\n",
" print(itr3)\n",
" return [t1 , t2 , t3] , [itr1 , itr2 , itr3]\n",
"\n",
"\n",
"if __name__ == '__main__':\n",
" Matrix = np.array([\n",
" [5, 2, 1, 1],\n",
" [2, 6, 2, 1],\n",
" [1, 2, 7, 1],\n",
" [1, 1, 2, 8]\n",
" ])\n",
" RowVector = np.array([29, 31, 26, 19])\n",
"\n",
" times , iters = run_test_case(Matrix, RowVector);\n",
" x_axis = [\"Jacobi\" , \"Gauss-Siedel\" , \"Successive Over Relaxation\"];\n",
" y_axis = iters\n",
" fig = plt.figure(figsize = (10, 5))\n",
" plt.bar(x_axis, y_axis, color ='maroon', width = 0.2)\n",
" plt.xlabel(\"Numerical methods\")\n",
" plt.ylabel(\"Iterations taken\")\n",
" plt.title(\"Convergence of algorithms\")\n",
" plt.savefig('fig.png')\n",
" ############################################################\n",
" y_axis = times\n",
" fig = plt.figure(figsize = (10, 5))\n",
" plt.bar(x_axis, y_axis, color ='green', width = 0.2)\n",
"\n",
" plt.xlabel(\"Numerical methods\")\n",
" plt.ylabel(\"Time taken(ms)\")\n",
" plt.title(\"Time taken by algorithms\")\n",
" plt.savefig('fig2.png')\n"
],
"execution_count": 5,
"outputs": [
{
"output_type": "stream",
"text": [
"System of equations:\n",
"[ 5*x1 + 2*x2 + ...