
Aug 20, 2012
Leave a comment
By admin

**Insert a value at a given position in an array:**

For inserting a value at a given position, the elements from the last element down to the given position are shifted one place to the right, and the new value is then placed at the specified position.

/* Insertion at a position in an array */

#include <stdio.h>

#define MAX 10

int main(){
    int arr[MAX], n, i, pos, val;
    printf("\nEnter total numbers?");
    scanf("%d", &n);
    for(i = 0; i < n; i++){
        printf("\nEnter number?");
        scanf("%d", &arr[i]);
    }
    printf("\nEnter the value to insert?");
    scanf("%d", &val);
    printf("\nEnter the position where to insert?");
    scanf("%d", &pos);
    /* shift elements right, from the last element down to pos */
    for(i = n - 1; i >= pos; i--)
        arr[i + 1] = arr[i];
    arr[pos] = val;
    printf("\nArray elements after insertion:");
    for(i = 0; i < n + 1; i++)
        printf("%d\t", arr[i]);
    return 0;
}

**Binary Search:**

Binary Search halves the search range at each step. It operates on a sorted array. It compares the element at the middle of the current range with the value to be searched: if the value is smaller than the middle element, the value is looked up in the range from the first element up to the middle; otherwise the new search range runs from the middle to the last element. This process continues until the required element is located or the lower bound becomes greater than the upper bound. The efficiency of Binary Search is O(log_{2}n) in the average and worst cases and O(1) in the best case. The ‘C’ program to perform Binary Search is given below:

/* Binary Search */

#include <stdio.h>

#define MAX 10

int main(){
    int arr[MAX], i, n, val;
    int lb, ub, mid;
    printf("\nEnter total numbers?");
    scanf("%d", &n);
    for(i = 0; i < n; i++){
        printf("\nEnter number?");
        scanf("%d", &arr[i]);
    }
    printf("\nEnter the value to search?");
    scanf("%d", &val);
    lb = 0; ub = n - 1;
    while(lb <= ub){
        mid = (lb + ub) / 2;
        if(val < arr[mid])
            ub = mid - 1;
        else if(val > arr[mid])
            lb = mid + 1;
        else {
            printf("\nNumber found...!");
            return 0;
        }
    }
    printf("\nNumber not found...!");
    return 0;
}

**Linear Search:**

Linear Search involves searching for an element in a given set of elements, beginning from the first element and proceeding up to the last element, or up to the element's first occurrence if it is present in the set. The efficiency of Linear Search is O(n). The program to perform Linear Search is given below:

/* Linear Search */

#include <stdio.h>

#define MAX 10

int main(){
    int arr[MAX], n, i, val;
    printf("\nEnter total numbers?");
    scanf("%d", &n);
    for(i = 0; i < n; i++){
        printf("\nEnter number?");
        scanf("%d", &arr[i]);
    }
    printf("\nEnter the value to search?");
    scanf("%d", &val);
    for(i = 0; i < n; i++){
        if(arr[i] == val){
            printf("\nNumber found.");
            return 0;
        }
    }
    printf("\nNumber not found.");
    return 0;
}

- Linear arrays
- Memory representation/address calculation
- Operations
  - Traversal
  - Search
  - Insertion
  - Deletion
- Two Dimensional arrays

An array is a set of homogeneous elements (elements of the same type) referenced under one name along with a subscript. The subscript is the index, or position, of an element. In ‘C’, subscript values begin at zero. Arrays may be one-dimensional, two-dimensional or multi-dimensional. Generally, one-dimensional and two-dimensional arrays are used to represent real-world entities when solving problems in a given domain. A one-dimensional array is declared in ‘C’ as follows:

int arr[10];

The above declaration declares an integer array of 10 integers referred to by the name arr. Similarly two dimensional arrays may be declared:

int arr[5][6];

The above declaration declares a two dimensional array, arr of 5 rows and 6 columns. Total integers that may be stored in the above array are 5 x 6 = 30. An array may be represented as an Abstract Data Type.

The address of the first element of an array is called its base address. Arrays are by default passed to functions by reference, using this base address. Once the base address is available, the remaining elements of the array may be accessed by manipulating the subscript. In ‘C’, arr[i] is equivalent to writing *(arr+i), which refers to the i^{th} element of the array (counting from zero). Two-dimensional arrays may be stored in row-major form, in which all the elements of the same row are placed together, or in column-major form, in which all the elements of the same column are placed together, beginning from the first column. We can perform various operations on arrays, which will be explained in separate posts with relevant ‘C’ programs.

1. Introduction to Algorithms

- Input, output, definiteness, finiteness, effectiveness
- Top-down & Bottom-up approaches
- Algorithm complexity
  - Space complexity
  - Time complexity
  - Time-space trade-off
- Big ‘O’ notation

An *algorithm* defines some process, how an operation works. It is specified in terms of simple steps. Each step must be clearly implementable in a mechanical way, perhaps by a single machine instruction, by a single programming language statement or by a previously defined algorithm. An algorithm to solve a problem must be *correct*; it must produce correct output when given valid input data. When the input data is invalid we usually want the algorithm to warn us and terminate gracefully. Often we want to know how much time an algorithm will take or how much space (computer memory) it will use and this leads to the analysis of algorithms and to the field of computational complexity.

Algorithms and data structures can be specified in any adequately precise language. English and other natural languages are satisfactory if used with care to avoid ambiguity but more precise mathematical languages and programming languages are generally preferred. The execution of the latter can also be automated. A *program* is the expression of some algorithm(s) and data structure(s) in a particular programming language. The program can be used on all types of computer for which suitable language compilers or interpreters exist, making it more valuable.

The basic features of an algorithm include appropriate input, output, definiteness, finiteness and effectiveness. An algorithm must operate upon a given set of inputs to give the desired output. Every step must be definite, admitting exactly one interpretation. An algorithm must be finite, terminating after a bounded number of steps. Finally, an algorithm must be effective and efficient in terms of space and time considerations, to minimize the complexity of its implementation.

**Top down and Bottom up Approaches:**

In the **Top-down** model an overview of the system is formulated first, without going into detail for any part of it. Each part of the system is then refined by designing it in more detail. Each new part may be refined again, in yet more detail, until the entire specification is detailed enough to validate the model. The “top-down” model is often used to get an overview of the proposed system, but it is insufficient for understanding the system's elementary mechanisms.

By contrast, in **Bottom-up** design individual parts of the system are specified in detail. The parts are then linked together to form larger components, which are in turn linked until a complete system is formed. Strategies based on “bottom-up” information flow seem potentially necessary and sufficient because they are based on knowledge of all the variables that may affect the elements of the system.

**Space Complexity:**

Space complexity measures the amount of memory required to run an algorithm or a program: essentially, the number of memory cells the algorithm requires. A good algorithm keeps this number as small as possible. Space complexity matters alongside time complexity, which determines how fast the algorithm carries out its work in practice.

**Instruction Space:**

Instruction space is the space needed to store the compiled version of the program's instructions. (The space for simple variables, fixed-size structured variables and constants is the data space.)

**Time Complexity:**

Sometimes it is essential to determine in advance the time an algorithm requires to execute successfully and give the desired result, i.e. to estimate the time a program may take before it is actually executed. A good measure for the running time is the number of comparisons executed.

Time complexity theory measures the amount of time required to execute an algorithm or a program, counted in “steps”, or computer operations (addition, subtraction, multiplication, assignment, etc.). A count of the number of steps needed to run an algorithm is its “raw time complexity”. A raw time complexity of 4N^{2} – 5N + 41 simplifies to O(N^{2}); constant terms all reduce to O(1).

The number of (machine) instructions which a program executes during its run is called its time complexity. This number depends primarily on the size of the program's input and on the algorithm used.

There is often a **time-space trade-off** involved in a problem; that is, it cannot be solved with both little computing time and low memory consumption. One then has to make a compromise and exchange computing time for memory consumption, or vice versa, depending on which algorithm one chooses and how one parameterizes it.

Time complexities are always specified in the so-called **O-notation** in computer science. One way to describe complexity is by saying that the sorting method has running time O(n^{2}). The expression O is also called a Landau symbol.

Mathematically speaking, O(n^{2}) stands for a set of functions: exactly those functions which, “in the long run”, do not grow faster than the function n^{2}, that is, those functions for which n^{2} is an upper bound (apart from a constant factor). To be precise, the following holds: a function f is an element of the set O(n^{2}) if there are a factor c and an integer n_{0} such that for all n equal to or greater than n_{0} the following holds:

f(n) <= c · n^{2}

A function f from O(n^{2}) may grow considerably more slowly than n^{2} so that, mathematically speaking, the quotient f/n^{2} converges to 0 with growing n. An example of this is the function f(n) = n.

**Ways to calculate frequency/efficiency of an algorithm:**

**Eg. (i):**

for i = 1 to n
    for j = i to n

i        j           No. of times
1        1 to n      n
2        2 to n      n-1
3        3 to n      n-2
…        …           …
n-1      n-1 to n    2
n        n to n      1

Therefore,

1 + 2 + 3 + … + (n-1) + n = n(n+1)/2 (sum of the first n natural numbers)

Frequency, f(n) = n(n+1)/2 = n^{2}/2 + n/2 = 0.5n^{2} + 0.5n

For large n, the n^{2} term dominates and the constant factor 0.5 is negligible.

Therefore in Big(O) notation, f(n) = O(n^{2}).

**Eg. (ii):**

for i = 1 to n
    sum = sum + i;

Big (O) Notation, f(n) = O(n)

**Linear Search:**

1. **Best Case:** When the search value is found at the first position

f(n) = 1

2. **Worst Case:** When the value is not in the array, or is at the last position

f(n) = n (n comparisons are made)

3. **Average Case:** The value is equally likely to be found at any of the positions 1, 2, 3, …, n, so the average number of comparisons is

(1 + 2 + 3 + … + n) / n = n(n+1)/(2n) = (n+1)/2 = 0.5n + 0.5

f(n) = O(n)

Thus Big-O Notation is a useful measurement tool for measuring time complexity.