Can Normal Distribution Be Skewed: Detailed Facts, Examples And FAQs


The answer to the common confusion "can a normal distribution be skewed" is no: the normal distribution is not a skewed distribution, because its curve is symmetric, has no tail on either side, and its skewness is exactly zero. The normal distribution curve is bell shaped, with symmetry about the center.

Since skewness is the lack of symmetry in a curve, a curve that is symmetric necessarily lacks skewness.

How do you tell if the data is normally distributed?

To check whether data are normally distributed, sketch the histogram: if the curve is symmetric and bell shaped, the data are approximately normally distributed. Once the concept of skewness is clear, the curve of the data itself answers the question of whether the distribution is skewed. Since sketching the histogram in every case is tedious and time consuming, a number of statistical tests, such as the Anderson-Darling statistic (AD), are more convenient for deciding whether data are normally distributed or not.

Data that follow the normal distribution have zero skewness, while the curve of a skewed distribution has different characteristics and lacks this symmetry. We will illustrate the normal case with the following example:

Example: Find the percentage of scores lying between 70 and 80 if the mathematics scores of university students are normally distributed with mean 67 and standard deviation 9.

symmetry in the normal distribution

Solution:

To find the percentage of scores we use the probabilities of the normal distribution discussed earlier: first we convert to the standard normal variate, and then use the standard normal table to find the probability, with the conversion

Z=(X-μ)/σ

We want the percentage of scores between 70 and 80, so we use the random variable values 70 and 80 with the given mean 67 and standard deviation 9. This gives

Z = (70-67)/9 = 0.333

and

Z = (80-67)/9 = 1.444

This we can sketch as


The shaded region between z = 0.333 and z = 1.444 is the area we need. From the table of the standard normal variate the probabilities are

P(Z > 0.333) = 0.3707
and
P(Z > 1.444) = 0.0749
so
P(0.333 < Z < 1.444) = P(Z > 0.333) − P(Z > 1.444) = 0.3707 − 0.0749 = 0.2958

So 29.58% of students will score between 70 and 80.

In the above example the skewness of the curve is zero and the curve is symmetric; to check whether real data are normally distributed we have to perform hypothesis tests.
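The table lookup can be cross-checked in code; below is a minimal sketch using SciPy's standard normal CDF (assuming SciPy is available):

```python
# Verifying the worked example above with SciPy's standard normal CDF.
from scipy.stats import norm

mean, sd = 67, 9
z_lo = (70 - mean) / sd   # ≈ 0.333
z_hi = (80 - mean) / sd   # ≈ 1.444

# P(70 < X < 80) = Φ(z_hi) − Φ(z_lo)
p = norm.cdf(z_hi) - norm.cdf(z_lo)
print(f"P(70 < X < 80) ≈ {p:.4f}")  # ≈ 0.2951, matching 0.2958 up to table rounding
```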

How do you tell if a distribution is skewed left or right?

A distribution is skewed if its curve has a right tail or a left tail, so depending on the nature of the curve we can judge whether the distribution is positively or negatively skewed. The concept of skewness is discussed in detail in the articles on positively and negatively skewed distributions. If the tail stretches to the left the distribution is skewed left, and if the tail stretches to the right the distribution is skewed right. The easiest check is the ordering of the central tendencies: if mean < median < mode the distribution is left skewed, and if mean > median > mode the distribution is right skewed. The geometrical representation is as follows

left skewed distribution
right skewed distribution

The measures used to calculate left or right skewness are given in detail in the article on skewness.

What is an acceptable skewness?

Since skewness, as discussed earlier, is the lack of symmetry, it must be clear what range of skewness is acceptable. The question arises when checking how close a distribution is to normal: in the normal distribution the skewness is zero, and a distribution whose skewness is near zero is the most acceptable. So, after testing for skewness, if the value is near zero the skewness is acceptable, the exact tolerance depending on the requirement and range set for the application.

In brief, acceptable skewness is skewness near zero, as per the requirement.

How skewed is too skewed?

Skewness is the statistical measurement of the asymmetry present in the curve of a distribution. If the measured skewness is far from zero the distribution is too skewed, while a value of zero means the curve is symmetric; a common rule of thumb treats an absolute skewness above about 1 as highly skewed.

How do you determine normal distribution?

To determine whether a distribution is normal we check whether it has symmetry: if symmetry is present and the skewness is zero, then the distribution is a normal distribution. The detailed methods and techniques were already discussed in the article on the normal distribution.

Do outliers skew data?

In a data set, any observation that behaves unusually and lies very far from the rest of the data is known as an outlier. In most cases outliers are responsible for the skewness of the distribution, and because of their unusual nature the distribution acquires skewness, so we can say that outliers skew data. However, outliers do not skew data in every case; they skew the data only when they fall systematically to one side, producing a left- or right-tailed curve.

The previous articles discuss the normal distribution and skewed distributions in detail.

Negatively Skewed Distribution: 9 Facts You Should Know


 Skewed Distribution | skewed distribution definition

A distribution in which symmetry is not present, and whose curve shows a tail on either the left or the right side, is known as a skewed distribution; skewness is therefore the asymmetry present in the curve or histogram, in contrast to the symmetric, or normal, curve.

Depending on the measures of central tendency, the nature of the distribution, skewed or not, can be evaluated; there are special relations between mean, mode and median in left-tailed and right-tailed skewed distributions.

left-skewed distribution
right-skewed distribution

normal distribution vs skewed | normal vs skewed distribution

Normal distribution | Skewed distribution
The curve is symmetric | The curve is not symmetric
The measures of central tendency mean, mode and median are equal | The measures of central tendency mean, mode and median are not equal
mean = median = mode | mean > median > mode or mean < median < mode

Normal distribution vs skewed distribution

skewed distribution examples in real life

Skewed distributions occur in a number of real-life situations, such as ticket sales for a particular show or movie across months, records of athletes' performances in competition, stock market returns, real estate rate fluctuations, the life cycle of specific species, income variation, exam scores and many other competitive outcomes. Distribution curves showing asymmetry occur frequently in applications.

difference between symmetrical and skewed distribution | symmetrical and skewed distribution

The main difference between a symmetrical distribution and a skewed distribution is the relation among the central tendencies mean, median and mode. In addition, as the name suggests, in a symmetrical distribution the curve of the distribution is symmetric, while in a skewed distribution the curve is not symmetric but has skewness, which may be right-tailed or left-tailed. Different distributions differ only in the nature of their skewness and symmetry, so all probability distributions can be classified into these two main categories.

To find the nature of a distribution, symmetric or skewed, we must either draw the curve of the distribution or compute the coefficient of skewness with the help of absolute or relative measures.

highly skewed distribution

If the modal (highest) value of the distribution differs from the mean and median, the distribution is skewed; if the highest value coincides with the mean and median, and all are equal, the distribution is symmetric. A highly skewed distribution may be positive or negative, and its modal value can be found using the coefficient of skewness.

Negatively skewed distribution| which is a negatively skewed distribution

Negatively skewed distribution

Any distribution in which the measures of central tendency follow the order mean < median < mode, and whose coefficient of skewness is negative, is a negatively skewed distribution. It is also known as a left-skewed distribution, because the tail of the graph or plot of the information lies to the left.

negatively skewed distribution

The coefficient of skewness for a negatively skewed distribution can easily be found with the usual methods of computing coefficients of skewness.

negatively skewed distribution example

If 150 students performed in an examination as given below, find the nature of the skewness of the distribution.

marks | 0-10 | 10-20 | 20-30 | 30-40 | 40-50 | 50-60 | 60-70 | 70-80
freq  | 12   | 40    | 18    | 0     | 12    | 42    | 14    | 12

Solution: To find the nature of the skewness of the distribution we calculate the coefficient of skewness, which requires the mean, mode, median and standard deviation of the given information; we compute these with the help of the following table

class interval | f   | mid value x | c.f. | d' = (x-35)/10 | f·d' | f·d'²
0-10           | 12  | 5           | 12   | -3             | -36  | 108
10-20          | 40  | 15          | 52   | -2             | -80  | 160
20-30          | 18  | 25          | 70   | -1             | -18  | 18
30-40          | 0   | 35          | 70   | 0              | 0    | 0
40-50          | 12  | 45          | 82   | 1              | 12   | 12
50-60          | 42  | 55          | 124  | 2              | 84   | 168
60-70          | 14  | 65          | 138  | 3              | 42   | 126
70-80          | 12  | 75          | 150  | 4              | 48   | 192
total          | 150 |             |      |                | 52   | 784

so the measures will be

\begin{array}{l} Median =\mathrm{L}+\frac{\frac{\mathrm{N}}{2}-\mathrm{C}}{\mathrm{f}} \times \mathrm{h}=40+\frac{75-70}{12} \times 10=44.17 \\ Mean (\overline{\mathrm{x}})=\mathrm{A}+\frac{\sum_{\mathrm{i}=1}^{\mathrm{k}} \mathrm{fd}^{\prime}}{\mathrm{N}} \times \mathrm{h}=35+\frac{52}{150} \times 10=38.47 \end{array}

(the median class is 40-50, since the cumulative frequency first reaches N/2 = 75 there, so L = 40, C = 70, f = 12 and h = 10)

and

\begin{aligned} Standard\ Deviation\ (\sigma) &=\mathrm{h} \times \sqrt{\frac{\sum \mathrm{fd}^{\prime 2}}{\mathrm{N}}-\left(\frac{\sum \mathrm{fd}^{\prime}}{\mathrm{N}}\right)^{2}} \\ &=10 \times \sqrt{\frac{784}{150}-\left(\frac{52}{150}\right)^{2}} \\ &=10 \times \sqrt{5.11}=22.60 \end{aligned}

hence the coefficient of skewness for the distribution is

S_k=\frac{3(Mean-Median)}{\sigma} =\frac{3(38.47-44.17)}{22.60}=-0.76

Since the coefficient of skewness is negative, the distribution is negatively skewed.
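These grouped-data computations can be reproduced in a few lines; a minimal sketch with NumPy (the array names are ours):

```python
# Verifying the grouped-data mean, median, standard deviation and skewness.
import numpy as np

mids  = np.array([5, 15, 25, 35, 45, 55, 65, 75])   # class midpoints
freqs = np.array([12, 40, 18, 0, 12, 42, 14, 12])    # frequencies, N = 150

N = freqs.sum()
mean = (freqs * mids).sum() / N                      # ≈ 38.47
sd = np.sqrt((freqs * mids**2).sum() / N - mean**2)  # ≈ 22.60

# Grouped median: L + ((N/2 - C)/f) * h for the class containing N/2 = 75
cf = freqs.cumsum()
i = np.searchsorted(cf, N / 2)                       # median class index (40-50)
L, h = 40, 10
C, f = cf[i - 1], freqs[i]
median = L + (N / 2 - C) / f * h                     # ≈ 44.17

sk = 3 * (mean - median) / sd
print(mean, median, sd, sk)                          # skewness ≈ -0.76
```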

negatively skewed distribution mean median mode

In a negatively skewed distribution the mean, median and mode are in ascending order (mean < median < mode), which corresponds to the tail on the left side of the curve of the distribution. The central tendencies of a negatively skewed distribution follow exactly the reverse pattern of a positively skewed distribution, and its curve is the mirror image of the positively skewed curve.

negatively skewed distribution curve

The curve of a negatively skewed distribution is left-skewed and lacks symmetry, whether drawn as a histogram or as a continuous curve.

negatively skewed distribution curve
left-skewed

Since skewness is the measure of the asymmetry present in a distribution, the curve of a negatively skewed distribution shows this asymmetry on the left side.

positively skewed normal distribution

A continuous distribution whose curve departs from the normal by gathering probability in the right tail shows a right-skewed curve, asymmetric about the median, with the central tendencies in descending order: mean > median > mode.

FAQs

Why chi square distribution is positively skewed

The chi-square distribution takes values from zero to infinity and its curve gathers probability in a long right tail, so it shows a right-skewed curve; hence the chi-square distribution is a positively skewed distribution.

Is Poisson distribution positively skewed

Yes, the Poisson distribution is a positively skewed distribution, as the probability mass is scattered toward the right tail, so the nature of the plot is positively skewed.

Why does negative binomial distribution always positively skew

The negative binomial distribution is always positively skewed because it is a generalization of the Pascal distribution, which is always positively skewed, and the skewness carries over to the negative binomial distribution.

Does skewness have any impact on linear regression models if my dependent variable and my interaction variable are positively skewed?

Skewness of the dependent variable and the interaction variable does not by itself mean the regression errors are skewed, and vice versa: skewed errors do not imply that the variables are skewed. The usual linear regression assumptions concern the distribution of the errors, not of the variables themselves.

Hermite Polynomial: 9 Complete Quick Facts


The Hermite polynomial occurs widely in applications as an orthogonal function. The Hermite polynomial is the series solution of the Hermite differential equation.

Hermite’s Equation

    The differential equation of second order with specific coefficients as

d2y/dx2 – 2x dy/dx + 2ny = 0

is known as Hermite's equation; by solving this differential equation we get the polynomial known as the Hermite polynomial.

Let us find the solution of the equation

d2y/dx2 – 2x dy/dx + 2ny = 0

with the help of the series solution of a differential equation: assume

y = ∑k=0..∞ ak x^(m+k), with a0 ≠ 0

so that

dy/dx = ∑k=0..∞ ak (m+k) x^(m+k−1) and d²y/dx² = ∑k=0..∞ ak (m+k)(m+k−1) x^(m+k−2)

now substituting all these values in the Hermite’s equation we have

∑k=0..∞ ak (m+k)(m+k−1) x^(m+k−2) − 2 ∑k=0..∞ ak (m+k) x^(m+k) + 2n ∑k=0..∞ ak x^(m+k) = 0

This equation must hold for each power of x separately, and as we assumed k is not negative, for the lowest degree term x^(m−2) we take k=0 in the first sum (the other sums contribute only higher powers), so the coefficient of x^(m−2) gives

a0 m(m−1) = 0 ⇒ m=0 or m=1

as a0 ≠ 0

now, in the same way, equating the coefficient of x^(m−1) to zero gives

a1 m(m+1) = 0

and equating the coefficients of xm+k to zero,

ak+2(m+k+2)(m+k+1)-2ak(m+k-n) = 0

we can write it as

ak+2 = [2(m+k−n) / ((m+k+2)(m+k+1))] ak

if m=0

ak+2 = [2(k−n) / ((k+2)(k+1))] ak

if m=1

ak+2 = [2(k+1−n) / ((k+3)(k+2))] ak

for these two cases now we discuss the cases for k

When m=0, ak+2 = [2(k−n)/((k+2)(k+1))] ak

If k=0, a2 = −(2n/2) a0 = −n a0

If k=1, a3 = [2(1−n)/6] a1 = −[2(n−1)/3!] a1

If k=2, a4 = [2(2−n)/12] a2 = [2(2−n)/12](−n a0) = [2² n(n−2)/4!] a0

If k=3, a5 = [2(3−n)/20] a3 = [2² (n−1)(n−3)/5!] a1

So for m=0 we have two cases: when a1=0, then a3=a5=a7=….=a2r+1=0, and when a1 is not zero the general coefficients are

a2r = [(−1)^r 2^r n(n−2)⋯(n−2r+2)/(2r)!] a0 and a2r+1 = [(−1)^r 2^r (n−1)(n−3)⋯(n−2r+1)/(2r+1)!] a1

Following this, putting in the values of a0, a1, a2, a3, a4 and a5, we have

y = a0 [1 − (2n/2!) x² + (2² n(n−2)/4!) x⁴ − ⋯] + a1 [x − (2(n−1)/3!) x³ + (2² (n−1)(n−3)/5!) x⁵ − ⋯]

and for m=1, a1=0, so putting k=0,1,2,3,… into

ak+2 = [2(k+1−n)/((k+3)(k+2))] ak

gives a2 = −[2(n−1)/3!] a0, a4 = [2² (n−1)(n−3)/5!] a0, and so on

so the solution will be

y = a0 [x − (2(n−1)/3!) x³ + (2² (n−1)(n−3)/5!) x⁵ − ⋯]

so the complete solution is

y = A [1 − (2n/2!) x² + (2² n(n−2)/4!) x⁴ − ⋯] + B [x − (2(n−1)/3!) x³ + (2² (n−1)(n−3)/5!) x⁵ − ⋯]

where A and B are the arbitrary constants

Hermite Polynomial

The solution of Hermite's equation is of the form y(x)=Ay1(x)+By2(x), where y1(x) and y2(x) are the series discussed above:

y1(x) = 1 − (2n/2!) x² + (2² n(n−2)/4!) x⁴ − ⋯
y2(x) = x − (2(n−1)/3!) x³ + (2² (n−1)(n−3)/5!) x⁵ − ⋯

One of these series terminates when n is a non-negative integer: if n is even y1 terminates, otherwise y2 terminates when n is odd, and we can easily verify that for n=0,1,2,3,4,5 these polynomials are

1, x, 1−2x², x−(2/3)x³, 1−4x²+(4/3)x⁴, x−(4/3)x³+(4/15)x⁵

So we can say that the terminating solutions of Hermite's equation are constant multiples of these polynomials; the multiple chosen so that the term containing the highest power of x is of the form 2ⁿxⁿ is denoted by Hn(x) and is known as the Hermite polynomial.

Generating function of Hermite polynomial

The Hermite polynomial is usually defined through its generating function:

e^(2tx−t²) = ∑n=0..∞ Hn(x) tⁿ/n!

where [n/2] is the greatest integer less than or equal to n/2; this gives the value of Hn(x) as

Hn(x) = ∑k=0..[n/2] [(−1)^k n!/(k!(n−2k)!)] (2x)^(n−2k) = (2x)ⁿ − n(n−1)(2x)^(n−2)/1! + n(n−1)(n−2)(n−3)(2x)^(n−4)/2! − ⋯

this shows that Hn(x) is a polynomial of degree n in x and

Hn(x) = 2ⁿxⁿ + πn−2(x)

where πn−2(x) is a polynomial of degree n−2 in x; Hn(x) is an even function of x for even n and an odd function of x for odd n, so

Hn(-x) = (-1)n Hn(x)

some of the starting Hermite polynomials are

H0(x) = 1

H1(x) = 2x

H2(x) = 4x² − 2

H3(x) = 8x³ − 12x

H4(x) = 16x⁴ − 48x² + 12

H5(x) = 32x⁵ − 160x³ + 120x
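The six polynomials listed above can be cross-checked in code; a small sketch using SymPy's built-in hermite (the physicists' convention used throughout this article):

```python
# Printing H_0(x) .. H_5(x) with SymPy to confirm the list above.
from sympy import hermite, symbols, expand

x = symbols('x')
for n in range(6):
    print(n, expand(hermite(n, x)))
# 0  1
# 1  2*x
# 2  4*x**2 - 2
# 3  8*x**3 - 12*x
# 4  16*x**4 - 48*x**2 + 12
# 5  32*x**5 - 160*x**3 + 120*x
```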

Generating function of Hermite polynomial by Rodrigues formula

The Hermite polynomial can also be defined by the Rodrigues formula, derived from the generating function:

Hn(x) = (−1)ⁿ e^(x²) dⁿ/dxⁿ (e^(−x²))

since the generating-function relation is

e^(2tx−t²) = ∑n=0..∞ Hn(x) tⁿ/n!

Using Maclaurin's theorem, Hn(x) is the n-th partial derivative of e^(2tx−t²) with respect to t, evaluated at t=0:

Hn(x) = [∂ⁿ/∂tⁿ e^(2tx−t²)] at t=0

Writing 2tx − t² = x² − (x−t)² and putting z = x − t, we have

e^(2tx−t²) = e^(x²) e^(−z²)

and since ∂z/∂t = −1, each differentiation with respect to t equals minus a differentiation with respect to z, so differentiating n times with respect to t gives

∂ⁿ/∂tⁿ e^(2tx−t²) = e^(x²) (−1)ⁿ dⁿ/dzⁿ e^(−z²)

For t=0 we have z=x, which gives

Hn(x) = (−1)ⁿ e^(x²) dⁿ/dxⁿ (e^(−x²))

which is the Rodrigues formula; from it the values of Hn(x) can be computed directly.
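The Rodrigues formula can likewise be verified symbolically; a minimal sketch, again relying on SymPy:

```python
# Checking (-1)^n * e^(x^2) * d^n/dx^n e^(-x^2) == H_n(x) for small n.
from sympy import exp, diff, hermite, simplify, symbols

x = symbols('x')
for n in range(6):
    rodrigues = (-1)**n * exp(x**2) * diff(exp(-x**2), x, n)
    assert simplify(rodrigues - hermite(n, x)) == 0
print("Rodrigues formula matches H_n(x) for n = 0..5")
```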

Example on Hermite Polynomial           

  1. Find the ordinary polynomial of

Solution: using the Hermite polynomial definition and the relations we have

2. Find the Hermite polynomial of the ordinary polynomial

Solution: We can convert the given ordinary polynomial to Hermite form as follows:

from this equation, equating the coefficients of the same powers,

hence the Hermite polynomial will be

Orthogonality of Hermite Polynomial | Orthogonal property of Hermite Polynomial

The important characteristic of the Hermite polynomial is its orthogonality, which states that

∫−∞..∞ e^(−x²) Hm(x) Hn(x) dx = 0 for m ≠ n, and ∫−∞..∞ e^(−x²) Hn²(x) dx = 2ⁿ n! √π

To prove this orthogonality, let us recall that

e^(2tx−t²) = ∑n Hn(x) tⁿ/n!

which is the generating function for the Hermite polynomial, and likewise we know

e^(2sx−s²) = ∑m Hm(x) sᵐ/m!

so multiplying these two equations, multiplying by e^(−x²) and integrating within infinite limits,

∫−∞..∞ e^(−x²) e^(2tx−t²) e^(2sx−s²) dx = ∑m,n (tⁿ sᵐ/(n! m!)) ∫−∞..∞ e^(−x²) Hn(x) Hm(x) dx

and since the exponent on the left combines as −x² + 2(t+s)x − t² − s² = −(x−t−s)² + 2ts, and

∫−∞..∞ e^(−u²) du = √π

using this value in the above expression we have

∫−∞..∞ e^(−x²) e^(2tx−t²) e^(2sx−s²) dx = √π e^(2ts) = √π ∑n 2ⁿ tⁿ sⁿ/n!

now equate the coefficients on both sides: the coefficient of tⁿsᵐ vanishes for m ≠ n, and for m = n it equals 2ⁿ n! √π,

which shows the orthogonal property of the Hermite polynomial.

The orthogonal property of the Hermite polynomial can also be shown in another way, by considering the recurrence relations.
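A numerical sanity check of the orthogonality relation, using SciPy's Hermite polynomials and quad integration (both assumed available):

```python
# Numerically checking the weighted inner product of Hermite polynomials.
import numpy as np
from math import factorial, pi, sqrt
from scipy.integrate import quad
from scipy.special import hermite

def inner(m, n):
    Hm, Hn = hermite(m), hermite(n)
    val, _ = quad(lambda x: np.exp(-x**2) * Hm(x) * Hn(x), -np.inf, np.inf)
    return val

print(inner(3, 2))                                   # ≈ 0, since m != n
print(inner(4, 4), 2**4 * factorial(4) * sqrt(pi))   # both ≈ 680.6
```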

Example on orthogonality of Hermite Polynomial

1. Evaluate the integral ∫−∞..∞ e^(−x²) H3(x) H2(x) dx.

Solution: By the orthogonality property of the Hermite polynomial, this integral vanishes whenever the two indices differ;

since the values here are m=3 and n=2, the integral is 0.

2. Evaluate the integral

Solution: Using the orthogonality property of Hermite polynomial we can write

Recurrence relations of Hermite polynomial

The values of the Hermite polynomial can easily be found using the recurrence relations:

1. H′n(x) = 2n Hn−1(x)
2. Hn+1(x) = 2x Hn(x) − 2n Hn−1(x)
3. H′n(x) = 2x Hn(x) − Hn+1(x)
4. H″n(x) − 2x H′n(x) + 2n Hn(x) = 0

These relations can easily be obtained with the help of the definition and properties, as shown in the proofs below.
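Before the proofs, here is a short SymPy check of relation (2) for a few small n; a sketch, not part of the original derivation:

```python
# Verifying H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x) symbolically.
from sympy import hermite, symbols, simplify

x = symbols('x')
for n in range(1, 6):
    lhs = hermite(n + 1, x)
    rhs = 2*x*hermite(n, x) - 2*n*hermite(n - 1, x)
    assert simplify(lhs - rhs) == 0
print("recurrence verified for n = 1..5")
```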

Proofs:1. We know the Hermite equation

y”-2xy’+2ny = 0

and the generating-function relation

e^(2tx−t²) = ∑n=0..∞ Hn(x) tⁿ/n!

by taking the differentiation with respect to x partially we can write it as

∑n=0..∞ H′n(x) tⁿ/n! = 2t e^(2tx−t²) = ∑n=0..∞ 2 Hn(x) tⁿ⁺¹/n!

now replace n by n−1 in the sum on the right,

∑n=0..∞ H′n(x) tⁿ/n! = ∑n=1..∞ 2n Hn−1(x) tⁿ/n!

by equating the coefficient of tⁿ,

H′n(x)/n! = 2n Hn−1(x)/n!

so the required result is

H′n(x) = 2n Hn−1(x)

2. In a similar way, differentiating partially with respect to t the equation

e^(2tx−t²) = ∑n=0..∞ Hn(x) tⁿ/n!

we get

(2x − 2t) e^(2tx−t²) = ∑n=1..∞ Hn(x) tⁿ⁻¹/(n−1)!

the n=0 term vanishes, so putting in the series for the exponential,

(2x − 2t) ∑n=0..∞ Hn(x) tⁿ/n! = ∑n=0..∞ Hn+1(x) tⁿ/n!

now equating the coefficients of tⁿ,

2x Hn(x)/n! − 2 Hn−1(x)/(n−1)! = Hn+1(x)/n!

thus

Hn+1(x) = 2x Hn(x) − 2n Hn−1(x)

3. To prove this result we eliminate Hn−1 from

H′n(x) = 2n Hn−1(x)

and

Hn+1(x) = 2x Hn(x) − 2n Hn−1(x)

so we get

Hn+1(x) = 2x Hn(x) − H′n(x)

thus we can write the result

H′n(x) = 2x Hn(x) − Hn+1(x)

4. To prove this result we differentiate

Hn+1(x) = 2x Hn(x) − H′n(x)

and get the relation

H′n+1(x) = 2 Hn(x) + 2x H′n(x) − H″n(x)

substituting the value H′n+1(x) = 2(n+1) Hn(x), from relation (1) with n replaced by n+1,

2(n+1) Hn(x) = 2 Hn(x) + 2x H′n(x) − H″n(x)

which gives

H″n(x) − 2x H′n(x) + 2n Hn(x) = 0

so Hn(x) indeed satisfies Hermite's equation.

Examples on Recurrence relations of Hermite polynomial

1. Show that

H2n(0) = (−1)ⁿ 2²ⁿ (1/2)n

Solution:

To show the result we use the explicit series

Hn(x) = ∑k=0..[n/2] [(−1)^k n!/(k!(n−2k)!)] (2x)^(n−2k)

Taking x=0 here, only the term with n−2k = 0 survives; for H2n that term has k=n, so we get

H2n(0) = (−1)ⁿ (2n)!/n! = (−1)ⁿ 2²ⁿ (1/2)n

since (1/2)n = (1/2)(3/2)⋯((2n−1)/2) = (2n)!/(2²ⁿ n!).

2. Show that

H′2n+1(0) = (−1)ⁿ 2²ⁿ⁺¹ (3/2)n

Solution:

Since, from the recurrence relation,

H′n(x) = 2n Hn−1(x)

here replace n by 2n+1, so

H′2n+1(x) = 2(2n+1) H2n(x)

taking x=0,

H′2n+1(0) = 2(2n+1) H2n(0) = 2(2n+1) (−1)ⁿ (2n)!/n! = (−1)ⁿ 2 (2n+1)!/n! = (−1)ⁿ 2²ⁿ⁺¹ (3/2)n

since (3/2)n = (3/2)(5/2)⋯((2n+1)/2) = (2n+1)!/(2²ⁿ n!).

3. Find the value of

H2n+1(0)

Solution:

Since we know the parity relation

Hn(−x) = (−1)ⁿ Hn(x)

for an odd index this gives H2n+1(−x) = −H2n+1(x); use x=0 here, so

H2n+1(0) = −H2n+1(0), i.e. H2n+1(0) = 0

4. Find the value of H’2n(0).

Solution :

we have the recurrence relation

H’n(x) = 2nHn-1(x)

here replace n by 2n

H’2n(x) = 2(2n)H2n-1(x)

put x=0

H’2n(0) = (4n)H2n-1(0) = 4n × 0 = 0

5. Show the following result

dᵐ/dxᵐ {Hn(x)} = 2ᵐ [n!/(n−m)!] Hn−m(x)

Solution:

Using the recurrence relation

H′n(x) = 2n Hn−1(x)

so

d²/dx² {Hn(x)} = 2² n(n−1) Hn−2(x)

and

d³/dx³ {Hn(x)} = 2³ n(n−1)(n−2) Hn−3(x)

differentiating in this way m times,

dᵐ/dxᵐ {Hn(x)} = 2ᵐ n(n−1)⋯(n−m+1) Hn−m(x)

which gives

dᵐ/dxᵐ {Hn(x)} = 2ᵐ [n!/(n−m)!] Hn−m(x)

6. Show that

Hn(-x) = (-1)n Hn(x)

Solution :

Replacing t by −t in the generating function, we can write

e^(2(−t)x−(−t)²) = ∑n Hn(x) (−t)ⁿ/n! = ∑n (−1)ⁿ Hn(x) tⁿ/n!

and the same left-hand side is the generating function evaluated at −x,

e^(2t(−x)−t²) = ∑n Hn(−x) tⁿ/n!

so from the coefficient of tⁿ we have

Hn(−x) = (−1)ⁿ Hn(x)

7. Evaluate the integral ∫−∞..∞ x e^(−x²) Hn(x) Hm(x) dx and show that it vanishes unless m = n ± 1.

Solution: One route is integration by parts together with differentiation under the integral sign with respect to x, using

H’n(x) = 2nHn-1 (x)

and

H’m(x) = 2mHm-1 (x)

More directly, the recurrence Hn+1(x) = 2x Hn(x) − 2n Hn−1(x) gives

x Hn(x) = n Hn−1(x) + (1/2) Hn+1(x)

so by the orthogonality relation,

∫−∞..∞ x e^(−x²) Hn(x) Hm(x) dx = n √π 2ᵐ m! δm,n−1 + (1/2) √π 2ᵐ m! δm,n+1

and since

δn,m-1 = δn+1, m

the value of the integral will be

√π 2^(n−1) n! δm,n−1 + √π 2ⁿ (n+1)! δm,n+1

Conclusion:

The Hermite polynomial occurs frequently in applications, so its basic definition, generating function, recurrence relations and related examples were discussed in brief here; if you require further reading, go through

https://en.wikipedia.org/wiki/Hermite_polynomials

For more post on mathematics, please follow our Mathematics page

2D Coordinate Geometry: 11 Important Facts


Locus in 2D Coordinate Geometry

Locus is a Latin word meaning 'place' or 'location'. The plural of locus is loci.

Definition of Locus:

In geometry, a 'locus' is a set of points which satisfy one or more specified conditions of a figure or shape. In modern mathematics, the path on which a point moves on the plane while satisfying given geometrical conditions is called the locus of the point.

In geometry, a locus is defined for lines, line segments and regular or irregular curved shapes, except shapes having a vertex or angles inside them. https://en.wikipedia.org/wiki/Coordinate_system

Examples on Locus:

Lines, circles, ellipses, parabolas, hyperbolas etc.: all these geometrical shapes are defined as loci of points.

Equation of the Locus:

The algebraic form of the geometrical properties or conditions satisfied by the coordinates of all the points on a locus is known as the equation of the locus of those points.

Method of Obtaining the Equation of the Locus:

To find the equation of the locus of a moving point on a plane, follow the process described below

(i) First, assume the coordinates of the moving point on the plane are (h,k).

(ii) Second, derive an algebraic equation in h and k from the given geometrical conditions or properties.

(iii) Third, replace h and k by x and y respectively in the above equation. This equation is called the equation of the locus of the moving point on the plane. (x,y) are the current coordinates of the moving point, and the equation of the locus must always be expressed in terms of x and y, i.e. the current coordinates.

Here are some examples to make the conception clear about locus.

4+ different types of solved problems on Locus:

Problem 1: If P is any point on the XY-plane which is equidistant from the two given points A(3,2) and B(2,-1) on the same plane, find the locus and the equation of the locus of the point P, with graph.

Solution: 

Graphical representation

Assume that the coordinates of any point on the locus of P on XY-plane are (h, k).

Since, P is equidistant from A and B, we can write

The distance of P from A=The distance of P from B

Or, |PA|=|PB|

Or, √[(h−3)² + (k−2)²] = √[(h−2)² + (k+1)²]

Or, (h² − 6h + 9 + k² − 4k + 4) = (h² − 4h + 4 + k² + 2k + 1) ——– squaring both sides.

Or, h² − 6h + 13 + k² − 4k − h² + 4h − 5 − k² − 2k = 0

Or, −2h − 6k + 8 = 0

Or, h + 3k − 4 = 0

Or, h + 3k = 4 ——– (1)

This is a first degree equation of h and k.

Now if h and k are replaced by x and y, equation (1) becomes the first degree equation x + 3y = 4, which represents a straight line.

Therefore, the locus of the point P(h, k) on the XY-plane is a straight line and the equation of the locus is x + 3y = 4. (Ans.)
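The algebra of Problem 1 can be reproduced symbolically; a minimal SymPy sketch (variable names are ours):

```python
# Re-deriving the locus equation |PA| = |PB| for A(3,2), B(2,-1).
from sympy import symbols, Eq, expand, simplify

h, k = symbols('h k')
lhs = (h - 3)**2 + (k - 2)**2       # |PA|^2
rhs = (h - 2)**2 + (k + 1)**2       # |PB|^2
locus = simplify(expand(lhs - rhs)) # -> -2*h - 6*k + 8
print(Eq(locus, 0))                  # equivalent to h + 3*k = 4
```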


Problem 2: If a point R moves on the XY-plane in such way that RA : RB = 3:2 where the coordinates of the points A and B are (-5,3) and (2,4) respectively on the same plane, then find the locus of the point R.

What type of curve does the equation of the locus of R indicate?

Solution: Let us assume that the coordinates of any point on the locus of the given point R on the XY-plane are (m, n).

As per the given condition RA : RB = 3:2,

we have,

(The distance of R from A) / (The distance of R from B) = 3/2

Or, √[(m+5)² + (n−3)²] / √[(m−2)² + (n−4)²] = 3/2

Or, (m² + 10m + 34 + n² − 6n) / (m² − 4m + n² − 8n + 20) = 9/4 ———– squaring both sides.

Or, 4(m² + 10m + 34 + n² − 6n) = 9(m² − 4m + n² − 8n + 20)

Or, 4m² + 40m + 136 + 4n² − 24n = 9m² − 36m + 9n² − 72n + 180

Or, 4m² + 40m + 136 + 4n² − 24n − 9m² + 36m − 9n² + 72n − 180 = 0

Or, −5m² + 76m − 5n² + 48n − 44 = 0

Or, 5(m² + n²) − 76m − 48n + 44 = 0 ———-(1)

This is a second degree equation of m and n .

Now if m and n are replaced by x and y, equation (1) becomes the second degree equation 5(x² + y²) − 76x − 48y + 44 = 0, where the coefficients of x² and y² are the same and the coefficient of xy is zero. This equation represents a circle.

Therefore, the locus of the point R(m, n) on the XY-plane is a circle and the equation of the locus is

5(x² + y²) − 76x − 48y + 44 = 0 (Ans.)


Problem 3: For all values of θ, (a cosθ, b sinθ) are the coordinates of a point P which moves on the XY-plane. Find the equation of the locus of P.

Solution: Let (h, k) be the coordinates of any point lying on the locus of P on the XY-plane.

Then asper the question, we can say

h= a Cosθ

Or, h/a = Cosθ —————(1)

And k = b Sinθ

Or, k/b = Sinθ —————(2)

Now taking square of both the equations (1) and (2) and then adding, we have the equation

h2/a2 + k2/b2 =Cos2θ + Sin2θ

Or, h2/a2 + k2/b2 = 1 (Since Cos2θ + Sin2θ =1 in trigonometry)

Therefore the equation of the locus of the point P is x²/a² + y²/b² = 1, which is an ellipse. (Ans.)


Problem 4: Find the equation of the locus of a point Q moving on the XY-plane, if the coordinates of Q are

( (7u−2)/(3u+2) , (4u+5)/(u−1) )

where u is the variable parameter.

Solution : Let the coordinates of any point on the locus of given point Q while moving on XY-plane be (h, k).

Then, h = (7u−2)/(3u+2) and k = (4u+5)/(u−1)

i.e. h(3u+2) = 7u-2 and k(u-1) = 4u+5

i.e. (3h-7)u = -2h-2 and (k-4)u = 5+k

i.e. u = (−2h−2)/(3h−7) —————(1)

and u = (5+k)/(k−4) —————(2)

Now, equating the equations (1) and (2), we get (−2h−2)/(3h−7) = (5+k)/(k−4)

Or, (-2h-2)(k-4) = (3h-7)(5+k)

Or, -2hk+8h-2k+8 = 15h+3hk-35-7k

Or, -2hk+8h-2k-15h-3hk+7k = -35-8

Or, -5hk-7h+5k = -43

Or, 5hk+7h-5k = 43

Therefore, the equation of the locus of Q is 5xy+7x-5y = 43.


More examples on Locus with answers for practice by your own:

Problems 5: If θ be a variables and u be a constant, then find the equation of locus of the point of intersection of the two straight lines x Cosθ + y Sinθ = u and x Sinθ- y Cosθ = u. ( Ans. x2+y2 =2u2 )

Problems 6: Find the equation of locus of the middle point of the line segment of the straight line x Sinθ + y Cosθ = t between the axes. ( Ans. 1/x2+1/y2 =4/t2 )

Problems 7: A point P moves on the XY-plane in such a way that the area of the triangle made by the point with the two points (2,-1) and (3,4) is zero. Find the equation of the locus of P. ( Ans. 5x-y=11)


Basic Examples on the Formulae “Centroid of a Triangle”  in 2D Coordinate Geometry

Centroid: The three medians of a triangle always intersect at a point located in the interior of the triangle, which divides each median in the ratio 2:1 from any vertex to the midpoint of the opposite side. This point is called the centroid of the triangle.

Problems 1:  Find the centroid of the triangle with vertices (-1,0), (0,4) and (5,0).

Solution:  We already know,

                                             If  A(x1,y1) , B(x2,y2) and C(x3,y3) be the vertices of a Triangle and G(x, y) be the centroid of the triangle, then Coordinates of G are

x = (x1 + x2 + x3)/3

and

y = (y1 + y2 + y3)/3

Using this formula we have , 

(x1,y1) ≌(-1,0) i.e. x1= -1, y1=0 ;

(x2,y2) ≌(0,4) i.e.   x2=0, y2=4 and

(x3,y3) ≌(5,0)  i.e.   x3=5, y3=0

(See formulae chart)

Graphical Representation

So, the x-coordinate of the centroid G, x = (x1 + x2 + x3)/3

i.e. x = (−1 + 0 + 5)/3

i.e. x = 4/3

                  and 

the y-coordinate of the centroid G, y = (y1 + y2 + y3)/3

i.e. y = (0 + 4 + 0)/3

i.e. y = 4/3

Therefore, the coordinates of the centroid of the given triangle are (4/3, 4/3). (Ans)
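The centroid formula is one line of code; a small helper (the function name is ours, not from the article):

```python
# Centroid of a triangle from its three vertices, matching Problem 1.
def centroid(p1, p2, p3):
    xs, ys = zip(p1, p2, p3)
    return (sum(xs) / 3, sum(ys) / 3)

print(centroid((-1, 0), (0, 4), (5, 0)))  # (1.333..., 1.333...), i.e. (4/3, 4/3)
```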

More answered problems are given below for further practice using the procedure described in above problem 1 :-

Problems 2: Find the coordinates of the centroid of the triangle with vertices at the points (-3,-1), (-1,3)) and (1,1).

Ans. (-1,1)

Problems 3: What is the x-coordinate of the centroid of the triangle with vertices (5,2), (10,4) and (6,-1) ?

Ans. 7

Problems 4: Three vertices of a triangle are (5,9), (2,15) and (11,12).Find the centroid of this triangle.

Ans. (6,12)


Shifting of Origin / Translation of Axes- 2D Co-ordinate Geometry

Shifting of origin means moving the origin to a new point while keeping the orientation of the axes unchanged, i.e. the new axes remain parallel to the original axes in the same plane. By this translation of axes, many problems on the algebraic equation of a geometric shape are simplified and solved more easily.

The formula of ” Shifting of Origin” or “Translation of Axes” are described below with graphical representation.

Formula:

If O is the origin, P(x,y) is any point in the XY-plane, and O is shifted to another point O′(a,b), with respect to which the coordinates of the point P become (x1,y1) in the same plane with new axes X1Y1, then the new coordinates of P are

x1 = x- a

y1 = y- b

Graphical representation for clarification: Follow the graphs


Few solved Problems on the formula of ‘Shifting of Origin’ :

Problem-1 : If there are two points (3,1) and (5,4) in the same plane and the origin is shifted to the point (3,1) keeping the new axes parallel to the original axes, then find the co-ordinates of the point (5,4) in respect with the new origin and axes.

Solution: Comparing with the formula of 'Shifting of Origin' described above, we have the new origin O′(a, b) ≌ (3,1), i.e. a=3, b=1, and the required point P, (x, y) ≌ (5,4), i.e. x=5, y=4


Now if (x1,y1) are the new coordinates of the point P(5,4), then as per the formula x1 = x−a and y1 = y−b,

we get, x1 = 5-3 and y1 =4-1

i.e. x1 = 2 and y1 =3

Therefore, the required new coordinates of the point (5,4) are (2,3). (Ans.)
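The translation formula is easy to encode; a minimal sketch (helper name is ours):

```python
# Translation of axes: x1 = x - a, y1 = y - b.
def shift_origin(point, new_origin):
    (x, y), (a, b) = point, new_origin
    return (x - a, y - b)

print(shift_origin((5, 4), (3, 1)))  # (2, 3), as in Problem 1
```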

Problem-2: After shifting the origin to a point in the same plane, the axes remaining parallel to each other, the coordinates of a point (5,-4) become (4,-5). Find the coordinates of the new origin.

Solution: Here, using the formula of 'Shifting the Origin' or 'Translation of Axes', we can say that the coordinates of the point P with respect to the old and new origin and axes respectively are (x, y) ≌ (5,-4), i.e. x=5, y= -4, and (x1,y1) ≌ (4,-5), i.e. x1= 4, y1= -5


Now we have to find the coordinates of the new Origin O′(a, b) i.e. a=?, b=?

Asper formula,

x1 = x- a

y1 = y- b

i.e. a=x-x1 and b=y-y1

Or, a=5-4 and b= -4-(-5)

Or, a=1 and b= -4+5

Or, a=1 and b= 1

Therefore, O'(1,1) be the new Origin i.e. the coordinates of the new Origin are (1,1). (Ans.)

Basic Examples on the Formulae “Collinearity of points (three points)” in 2D Coordinate Geometry

Problems 1:  Check whether the points (1,0), (0,0) and (-1,0) are collinear or not.

Solution:  We already know,

If A(x1,y1), B(x2,y2) and C(x3,y3) are any three collinear points, then the area of the triangle formed by them must be zero, i.e. the area of the triangle is ½[x1(y2 − y3) + x2(y3 − y1) + x3(y1 − y2)] = 0

(See formulae chart)

Using this formula we have ,

(x1,y1) ≌(-1,0) i.e.   x1= -1, y1= 0   ;

(x2,y2) ≌(0,0)  i.e.   x2= 0, y2= 0;

(x3,y3) ≌(1,0)  i.e.    x3= 1, y3= 0

Graphical Representation

So, the area of the triangle is = |½[x1(y2 − y3) + x2(y3 − y1) + x3(y1 − y2)]| i.e.

(L.H.S) = |½[−1(0−0) + 0(0−0) + 1(0−0)]|

= |½[(−1)×0 + 0×0 + 1×0]|

= |½[0 + 0 + 0]|

= |½ × 0|

= 0 (R.H.S)

Therefore, the area of the triangle made by the given points is zero, which means they lie on the same line.

Therefore, the given points are collinear points. (Ans)
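The same area test is easy to script; a small helper pair (names are ours), reusable for the practice problems below:

```python
# Collinearity test via the triangle-area formula.
def triangle_area(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1*(y2 - y3) + x2*(y3 - y1) + x3*(y1 - y2)) / 2

def collinear(p1, p2, p3):
    return triangle_area(p1, p2, p3) == 0

print(collinear((1, 0), (0, 0), (-1, 0)))   # True
print(collinear((-3, 2), (5, -3), (2, 2)))  # False (Problem 3)
```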

More answered problems are given below for further practice using the procedure described in the above problem 1 :-

Problems 2: Check whether the points (-1,-1), (0,0) and (1,1) are  collinear or not.

Ans. Yes

Problems 3: Is it possible to draw one line through three points (-3,2), (5,-3) and (2,2) ?

Ans.No

Problems 4: Check whether the points (1,2), (3,2) and (-5,2),connected by lines, can form a triangle in the coordinate plane.

Ans. No

______________________________

Basic Examples on the Formulae “Incenter of a Triangle” in 2D Coordinate Geometry

Incenter: It is the center of the triangle's largest inscribed circle (the incircle) which fits inside the triangle. It is also the point of intersection of the three bisectors of the interior angles of the triangle.

Problems 1: The vertices of a triangle are A(-4,0), B(0,3) and C(0,0). Find the incenter of the triangle.

Solution: We already know,

If A(x1,y1), B(x2,y2) and C(x3,y3) are the vertices, a=BC, b=CA and c=AB are the side lengths, and G′(x,y) is the incenter of the triangle,

The co-ordinates of G′ are

x = (ax1 + bx2 + cx3)/(a + b + c)

and         

y = (ay1 + by2 + cy3)/(a + b + c)

(See formulae chart)


As per the formula we have,

(x1,y1) ≌(-4,0) i.e.  x1= -4, y1=0 ;

(x2,y2) ≌(0,3) i.e.  x2=0, y2=3 ;

(x3,y3) ≌(0,0)  i.e.   x3=0, y3=0

We now compute the side lengths:

a = BC = √[(x3−x2)² + (y3−y2)²] = √[(0−0)² + (0−3)²] = √9 = 3 ——————(1)

b = CA = √[(x1−x3)² + (y1−y3)²] = √[(−4−0)² + (0−0)²] = √16 = 4 ——————(2)

c = AB = √[(x2−x1)² + (y2−y1)²] = √[(0+4)² + (3−0)²] = √(16+9) = √25 = 5 ——————(3)

and ax1 + bx2 + cx3 = (3 × (−4)) + (4 × 0) + (5 × 0) = −12 ——————(4)

ay1 + by2 + cy3 = (3 × 0) + (4 × 3) + (5 × 0) = 12 ——————(5)

a + b + c = 3 + 4 + 5 = 12 ——————(6)

Using the above equations (1), (2), (3), (4), (5) and (6) we can calculate the values of x and y from

x = (ax1 + bx2 + cx3)/(a + b + c)

Or, x = −12/12

Or, x = −1

and

y = (ay1 + by2 + cy3)/(a + b + c)

Or, y = 12/12

Or, y = 1

Therefore the required coordinates of the incenter of the given triangle are (−1, 1). (Ans.)
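The incenter formula translates directly into code; a minimal helper (names are ours) following the convention a=BC, b=CA, c=AB:

```python
# Incenter of a triangle as a side-length weighted average of vertices.
from math import dist  # Python 3.8+

def incenter(A, B, C):
    a, b, c = dist(B, C), dist(C, A), dist(A, B)
    s = a + b + c
    return ((a*A[0] + b*B[0] + c*C[0]) / s,
            (a*A[1] + b*B[1] + c*C[1]) / s)

print(incenter((-4, 0), (0, 3), (0, 0)))  # (-1.0, 1.0)
```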

More answered problems are given below for further practice using the procedure described in above problem 1 :-

Problems 2: Find the coordinates of the incenter of the triangle with vertices at the points (-3,-1), (-1,3)) and (1,1).

Problems 3: What is the x-coordinate of the incenter of the triangle with vertices (0,2), (0,0) and (0,-1) ?

Problems 4: Three vertices of a triangle are (1,1), (2,2) and (3,3). Find the incenter of this triangle.


Point Sections Or Ratio Formulae: 41 Critical Solutions

Basic Examples on the Formulae “Point sections or Ratio”

Case-I

Problems 21:  Find the coordinates of the point P(x, y) which internally divides the line segment joining the two points (1,1) and (4,1) in the ratio 1:2.

Solution:   We already know,

If a point P(x, y) divides the line segment AB internally in the ratio m:n, where the coordinates of A and B are (x1,y1) and (x2,y2) respectively, then the coordinates of P are

x = (m x2 + n x1)/(m + n)

and

y = (m y2 + n y1)/(m + n)

(See formulae chart)

Using this formula we can say , (x1,y1) ≌(1,1) i.e.   x1=1, y1=1   ;

(x2,y2)≌(4,1)   i.e.   x2=4, y2=1   

and

m:n  ≌ 1:2     i.e   m=1,n=2

Graphical Representation

Therefore,

x = (m x2 + n x1)/(m + n)

Or, x = (1×4 + 2×1)/(1 + 2) (putting the values of m, n, x1 and x2)

Or, x = 6/3

Or, x = 2

Similarly we get,

y = (m y2 + n y1)/(m + n)

Or, y = (1×1 + 2×1)/(1 + 2) (putting the values of m, n, y1 and y2)

Or, y = 3/3

Or, y = 1

Therefore, x=2 and y=1 are the coordinates of the point P, i.e. (2,1). (Ans)
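The section formula is also easy to check in code; a small helper (names are ours):

```python
# Internal division of segment AB in the ratio m:n.
def divide_internally(A, B, m, n):
    (x1, y1), (x2, y2) = A, B
    return ((m*x2 + n*x1) / (m + n), (m*y2 + n*y1) / (m + n))

print(divide_internally((1, 1), (4, 1), 1, 2))  # (2.0, 1.0), as in Problem 21
```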

More answered problems are given below for further practice using the procedure described in above problem 21:-

Problem 22: Find the coordinates of the point  which internally divides the line segment joining the two points (0,5) and (0,0) in the ratio 2:3.

                     Ans. (0,2)

Problem 23: Find the point which internally divides the line segment joining the points (1,1) and (4,1) in the ratio 2:1.

Ans. (3,1)

Problem 24: Find the point which lies on the line segment joining the two points (3,5,) and (3,-5,)  dividing it in the ratio 1:1

Ans. (3,0)

Problem 25: Find the coordinates of the point which internally divides the line segment joining the two points (-4,1) and (4,1) in the ratio 3:5

Ans. (-1,1)

Problem 26: Find the point which internally divides the line segment joining the two points (-10,2) and (10,2) in the ratio 1.5 : 2.5.

Ans. (-2.5, 2)

_____________________________

Case-II

Problems 27:   Find the coordinates of the point Q(x,y) which externally divides the line segment joining the two points (2,1) and (6,1) in the ratio 3:1.

Solution:  We already know,

If a point Q(x,y) divides the line segment AB externally in the ratio m:n, where the coordinates of A and B are (x1,y1) and (x2,y2) respectively, then the coordinates of Q are

x = (m x2 − n x1)/(m − n)

and

y = (m y2 − n y1)/(m − n)

(See formulae chart)

Using this formula we can say ,  (x1,y1) ≌(2,1)  i.e.  x1=2, y1=1   ;

                                                    (x2,y2)≌(6,1)  i.e.   x2=6, y2=1    and   

                                                    m:n  ≌ 3:1    i.e.    m=3,n=1   

Graphical Representation

Therefore,

x = (m x2 − n x1)/(m − n)

Or, x = (3×6 − 1×2)/(3 − 1) (putting the values of m, n, x1 and x2)

Or, x = (18 − 2)/2

Or, x = 16/2

Or, x = 8

Similarly we get,

y = (m y2 − n y1)/(m − n)

Or, y = (3×1 − 1×1)/(3 − 1) (putting the values of m, n, y1 and y2)

Or, y = 2/2

Or, y = 1

Therefore, x=8 and y=1 are the coordinates of the point Q, i.e. (8,1). (Ans)

More answered problems are given below for further practice using the procedure described in above problem 27:-

Problem 28: Find the point which externally divides the line segment joining the two points (2,2) and (4,2) in the ratio 3 : 1.

Ans. (5,2)

Problem 29: Find the point which externally divides the line segment joining the two points (0,2) and (0,5) in the ratio 5:2.

Ans. (0,7)

Problem 30: Find the point which lies on the extended part of the line segment joining the two points (-3,-2) and (3,-2) in the ratio 2 : 1.

Ans. (9,-2)

________________________________

Case-III

Problems 31:  Find the coordinates of the midpoint  of the line segment joining the two points (-1,2) and (1,2).

Solution:   We already know,

If a point R(x,y) is the midpoint of the line segment joining A(x1,y1) and B(x2,y2), then the coordinates of R are

x = (x1 + x2)/2

and

y = (y1 + y2)/2

(See formulae chart)

Case-III is Case-I with m=1 and n=1

Using this formula we can say ,  (x1,y1) ≌(-1,2)  i.e.  x1=-1, y1=2   and

                                                    (x2,y2)≌(1,2)  i.e.   x2=1, y2=2

Graphical Representation

Therefore,

x = (x1 + x2)/2

Or, x = (−1 + 1)/2 (putting the values of x1 and x2)

Or, x = 0/2

Or, x = 0

Similarly we get,

y = (y1 + y2)/2 = (2 + 2)/2

Or, y = 4/2

Or, y = 2

Therefore, x=0 and y=2 are the coordinates of the midpoint R, i.e. (0,2). (Ans)

More answered problems are given below for further practice using the procedure described in above problem 31:-

Problem 32: Find the coordinates of the midpoint of the line joining the two points (-1,-3) and (1,-4).

Ans. (0,-3.5)

Problem 33: Find the coordinates of the midpoint  which divides the line segment joining the two points (-5,-7) and (5,7).

Ans. (0,0)

Problem 34: Find the coordinates of the midpoint  which divides the line segment joining the two points (10,-5) and (-7,2).

Ans. (1.5, -1.5)

Problem 35: Find the coordinates of the midpoint which divides the line segment joining the two points (3,√2) and (1,3√2).

Ans. (2,2√2)

Problem 36: Find the coordinates of the midpoint  which divides the line segment joining the two points (2+3i,5) and (2-3i,-5).

Ans. (2,0)

Note: How to check whether a point divides a line (of length d units) internally or externally in the ratio m:n

If (m×d)/(m+n) + (n×d)/(m+n) = d, the division is internal, and

If (m×d)/(m−n) − (n×d)/(m−n) = d, the division is external

____________________________________________________________________________

Basic Examples on the Formulae “Area of a Triangle”

Case-I 

Problems 37: What is the area of the triangle with two vertices A(1,2) and B(5,3) and height with respect to AB be 3 units in the coordinate plane ?

 Solution:   We already know,

If “h” be the height and “b” be the base of Triangle, then  Area of the Triangle is  = ½ ×  b ×  h

(See formulae chart)

Graphical Representation

Using this formula we can say , 

h = 3 units and b = √[(x2 − x1)² + (y2 − y1)²]

i.e. b = √[(5−1)² + (3−2)²]

Or, b = √[(4)² + (1)²]

Or, b = √(16 + 1)

Or, b = √17 units

Therefore, the required area of the triangle is = ½ × b × h i.e.

= ½ × √17 × 3 sq.units

= (3/2)√17 sq.units (Ans.)

______________________________________________________________________________________

Case-II

Problems 38:What is the area of the triangle with vertices A(1,2), B(5,3) and C(3,5) in the coordinate plane ?

 Solution:   We already know,

If A(x1,y1), B(x2,y2) and C(x3,y3) are the vertices of a triangle, the area of the triangle is

= |½[x1(y2 − y3) + x2(y3 − y1) + x3(y1 − y2)]|

(See formulae chart)

Using this formula we have , 

                                              (x1,y1) ≌(1,2) i.e.   x1=1, y1=2   ;

                                              (x2,y2) ≌(5,3)  i.e.   x2=5, y2=3 and

                                              (x3,y3) ≌(3,5)  i.e.    x3=3, y3=5

Graphical Representation

Therefore, the area of the triangle is = |½[x1(y2 − y3) + x2(y3 − y1) + x3(y1 − y2)]| i.e.

= |½[1(3−5) + 5(5−2) + 3(2−3)]| sq.units

= |½[1×(−2) + 5×3 + 3×(−1)]| sq.units

= |½[−2 + 15 − 3]| sq.units

= |½ × 10| sq.units

= 5 sq.units (Ans.)

More answered problems are given below for further practice using the procedure described in above problems :-

Problem 39: Find the area of the triangle whose vertices are (1,1), (-1,2) and (3,2).

Ans. 2 sq.units

Problem 40: Find the area of the triangle whose vertices are (3,0), (0,6) and (6,9).

Ans. 22.5 sq.units

Problem 41: Find the area of the triangle whose vertices are (-1,-2), (0,4) and (1,-3).

Ans. 6.5 sq.units

Problem 42: Find the area of the triangle whose vertices are (-5,0), (0,5) and (0,-5).

Ans. 25 sq.units

 _______________________________________________________________________________________

For more post on Mathematics, please follow our Mathematics page.

Covariance, Variance Of Sums: 7 Important Facts

COVARIANCE, VARIANCE OF SUMS, AND CORRELATIONS OF RANDOM VARIABLES

The statistical parameters of random variables of different natures are easy to obtain and understand using the definition of the expectation of a random variable; in the following we find some such parameters with the help of the mathematical expectation of random variables.

Moments of the number of events that occur

So far we know that the expectation of different powers of a random variable gives the moments of the random variable, and how to find the expectation of a random variable from events. Now we are interested in the expectation when pairs of events occur: if X represents the number of events that occur, then for the events A1, A2, …, An define the indicator variable Ii as

Ii = 1 if Ai occurs, and Ii = 0 otherwise

the expectation of X in the discrete sense will be

E[X] = ∑i=1..n P(Ai)

because the random variable X is

X = ∑i=1..n Ii

now, to find the expectation of the number of pairs of events that occur, we use the combination

C(X,2) = ∑i<j Ii Ij

this gives the expectation as

E[C(X,2)] = ∑i<j P(Ai Aj), i.e. E[X(X−1)]/2 = ∑i<j P(Ai Aj)

from this we get the expectation of X squared, and the value of the variance also, by

E[X²] = 2 ∑i<j P(Ai Aj) + E[X] and Var(X) = E[X²] − (E[X])²

Using this discussion, we focus on different kinds of random variables to find such moments.

Moments of binomial random variables

If p is the probability of success in each of n independent trials, then, denoting by Ai success on trial i,

P(Ai Aj) = p², so ∑i<j P(Ai Aj) = C(n,2) p²

E[X(X−1)] = 2 C(n,2) p² = n(n−1)p²

E[X²] = n(n−1)p² + E[X] = n(n−1)p² + np

and hence the variance of the binomial random variable will be

Var(X) = E[X²] − (E[X])² = n(n−1)p² + np − n²p² = np(1−p)

because

E[X] = ∑i P(Ai) = np

if we generalize for k events,

C(X,k) = ∑i1<i2<…<ik Ii1 Ii2 ⋯ Iik

E[C(X,k)] = C(n,k) p^k

this expectation we can obtain successively for values of k greater than 3; let us find it for 3:

E[X(X−1)(X−2)] = 3! C(n,3) p³ = n(n−1)(n−2)p³

and since X(X−1)(X−2) = X³ − 3X² + 2X,

E[X³] = n(n−1)(n−2)p³ + 3E[X²] − 2E[X] = n(n−1)(n−2)p³ + 3n(n−1)p² + np

using this iteration we can get the general factorial moments

E[X(X−1)⋯(X−k+1)] = n(n−1)⋯(n−k+1) p^k

Moments of hypergeometric random variables

We will understand the moments of this random variable with the help of an example: suppose n pens are randomly selected from a box containing N pens, of which m are blue. Let Ai denote the event that the i-th pen selected is blue. Then X, the number of blue pens selected, is equal to the number of the events A1, A2, …, An that occur, and because the i-th pen selected is equally likely to be any of the N pens, of which m are blue,

P(Ai) = m/N

and so

P(Ai Aj) = (m/N) × ((m−1)/(N−1))

E[X] = nm/N

E[X(X−1)] = 2 C(n,2) (m/N)((m−1)/(N−1)) = n(n−1) m(m−1)/[N(N−1)]

this gives

E[X²] = n(n−1) m(m−1)/[N(N−1)] + nm/N

so the variance of the hypergeometric random variable will be

Var(X) = n(n−1) m(m−1)/[N(N−1)] + nm/N − (nm/N)²

which simplifies to

Var(X) = n (m/N)(1 − m/N)(N−n)/(N−1)

in a similar way, for the higher moments,

E[C(X,k)] = C(n,k) C(m,k)/C(N,k) × k! / k! and

hence

E[X(X−1)⋯(X−k+1)] = n(n−1)⋯(n−k+1) m(m−1)⋯(m−k+1) / [N(N−1)⋯(N−k+1)]

Moments of the negative hypergeometric random variables

Consider the example of a package containing n+m vaccines, of which n are special and m are ordinary. The vaccines are removed one at a time, each new removal equally likely to be any of the vaccines remaining in the package. Let the random variable Y denote the number of vaccines that need to be withdrawn until a total of r special vaccines have been removed; Y follows the negative hypergeometric distribution, which stands to the hypergeometric distribution somewhat as the negative binomial stands to the binomial. The probability mass function follows by requiring that the first k−1 draws give r−1 special and k−r ordinary vaccines and that the k-th draw gives a special vaccine:

P(Y = k) = [C(n, r−1) C(m, k−r) / C(n+m, k−1)] × (n−r+1)/(n+m−k+1)

now write the random variable Y as

Y = r + X

where X is the number of ordinary vaccines withdrawn before the r-th special one. For the events Ai, with Ai the event that the i-th ordinary vaccine is withdrawn before the r-th special vaccine, X is the sum of the indicators of the Ai, and

P(Ai) = r/(n+1)

as the i-th ordinary vaccine is equally likely to occupy any of the n+1 positions relative to the n special vaccines; hence

E[X] = mr/(n+1) and E[Y] = r + mr/(n+1)

hence, to find the variance of Y we must know the variance of X, so

P(Ai Aj) = r(r+1)/[(n+1)(n+2)]

E[X(X−1)] = m(m−1) r(r+1)/[(n+1)(n+2)]

Var(X) = m(m−1) r(r+1)/[(n+1)(n+2)] + mr/(n+1) − (mr/(n+1))²

hence

Var(Y) = Var(X), since Y = r + X differs from X only by the constant r.

COVARIANCE             

The relationship between two random variables can be represented by the statistical parameter covariance. Before defining the covariance of two random variables X and Y, recall that for functions g and h of X and Y respectively, in the jointly continuous case,

E[g(X) h(Y)] = ∫∫ g(x) h(y) f(x,y) dx dy

and if X and Y are independent, so that f(x,y) = fX(x) fY(y), this factorizes as

E[g(X) h(Y)] = ∫ g(x) fX(x) dx × ∫ h(y) fY(y) dy = E[g(X)] E[h(Y)]

using this relation of expectation we can define covariance as

"The covariance between random variable X and random variable Y, denoted by cov(X,Y), is defined as

Cov(X,Y) = E[(X − E[X])(Y − E[Y])]"

using the definition of expectation and expanding, we get

Cov(X,Y) = E[XY − X E[Y] − Y E[X] + E[X]E[Y]]

= E[XY] − E[X]E[Y] − E[X]E[Y] + E[X]E[Y] = E[XY] − E[X]E[Y]

it is clear that if the random variables X and Y are independent then

E[XY] = E[X]E[Y]

Cov(X,Y) = E[XY] − E[X]E[Y] = 0

but the converse is not true; for example, if

P(X = 0) = P(X = 1) = P(X = −1) = 1/3

and we define the random variable Y as

Y = 0 if X ≠ 0, and Y = 1 if X = 0

so

XY = 0 always, hence E[XY] = 0, and since E[X] = 0 we get Cov(X,Y) = E[XY] − E[X]E[Y] = 0

here clearly X and Y are not independent, but the covariance is zero.

Properties of covariance

Covariance between random variables X and Y has the following properties:

(i) Cov(X,Y) = Cov(Y,X)

(ii) Cov(X,X) = Var(X)

(iii) Cov(aX,Y) = a Cov(X,Y)

(iv) Cov(∑i=1..n Xi, ∑j=1..m Yj) = ∑i=1..n ∑j=1..m Cov(Xi, Yj)

Using the definition of the covariance, the first three properties are immediate, and the fourth property follows by writing E[∑i Xi] = ∑i μi and E[∑j Yj] = ∑j νj; now by definition

Cov(∑i Xi, ∑j Yj) = E[(∑i Xi − ∑i μi)(∑j Yj − ∑j νj)]

= E[∑i (Xi − μi) ∑j (Yj − νj)] = ∑i ∑j E[(Xi − μi)(Yj − νj)] = ∑i ∑j Cov(Xi, Yj)

Variance of the sums

The important result from these properties is

Var(∑i=1..n Xi) = ∑i=1..n Var(Xi) + 2 ∑i<j Cov(Xi, Xj)

as

Var(∑i Xi) = Cov(∑i Xi, ∑j Xj) = ∑i ∑j Cov(Xi, Xj)

= ∑i Var(Xi) + ∑i≠j Cov(Xi, Xj)

If the Xi are pairwise independent then all the covariance terms vanish and Var(∑i Xi) = ∑i Var(Xi).

Example: Variance of a binomial random variable

If X is the random variable

X = X1 + X2 + … + Xn

where the Xi are independent Bernoulli random variables such that

P(Xi = 1) = p and P(Xi = 0) = 1 − p

then find the variance of the binomial random variable X with parameters n and p.

Solution:

Since the Xi are independent,

Var(X) = Var(X1) + Var(X2) + … + Var(Xn)

so for a single variable we have

E[Xi] = p

E[Xi²] = p (as Xi² = Xi)

Var(Xi) = E[Xi²] − (E[Xi])² = p − p² = p(1 − p)

so the variance is

Var(X) = np(1 − p)
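The result is easy to sanity-check by simulation; a quick Monte Carlo sketch with NumPy (parameter values are ours):

```python
# Checking Var(X) ≈ n p (1-p) for a binomial random variable.
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 0.3
samples = rng.binomial(n, p, size=200_000)
print(samples.var())    # ≈ 4.2
print(n * p * (1 - p))  # 4.2 exactly
```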

Example

For independent and identically distributed random variables Xi with mean μ and variance σ², let the sample mean be X̄ = ∑i Xi/n and the sample variance be

S² = ∑i=1..n (Xi − X̄)² / (n − 1)

then compute E[S²].

Solution:

By using the above property and definition we have

Var(X̄) = (1/n²) ∑i Var(Xi) = σ²/n

now, for the random variable S², writing Xi − X̄ = (Xi − μ) − (X̄ − μ) and summing,

(n − 1) S² = ∑i (Xi − μ)² − n (X̄ − μ)²

take the expectation:

(n − 1) E[S²] = n σ² − n Var(X̄) = n σ² − σ² = (n − 1) σ²

so E[S²] = σ², i.e. the sample variance is an unbiased estimator of σ².

Example:

Find the covariance of indicator functions for the events A and B.

Solution:

for the events A and B, the indicator functions are

IA = 1 if A occurs, 0 otherwise

IB = 1 if B occurs, 0 otherwise

so the expectations of these are

E[IA] = P(A)

E[IB] = P(B)

E[IA IB] = P(A ∩ B)

thus the covariance is

Cov(IA, IB) = E[IA IB] − E[IA] E[IB] = P(A ∩ B) − P(A) P(B)

= P(B)[P(A|B) − P(A)]

Example:

Show that

Cov(X̄, Xi − X̄) = 0

where the Xi are independent random variables with common variance σ² and X̄ = ∑j Xj/n is their sample mean.

Solution:

The covariance, using the properties and the definition, will be

Cov(X̄, Xi − X̄) = Cov(X̄, Xi) − Cov(X̄, X̄)

= Cov((1/n) ∑j Xj, Xi) − Var(X̄)

= (1/n) Var(Xi) − σ²/n (by independence, only the j = i term survives)

= σ²/n − σ²/n = 0

Example:

Calculate the mean and variance of the random variable S, the sum of n sampled values, if a set of N people each of whom has an opinion about a certain subject is measured by a real number v that represents the person's "strength of feeling" about the subject. Let vi represent the strength of feeling of person i, which is unknown; to collect information, a sample of n of the N people is taken randomly, and these n people are questioned so that their feelings are obtained and S is calculated.

Solution

Let us define the indicator function as

Ii = 1 if person i is in the random sample, 0 otherwise

thus we can express S as

S = ∑i=1..N vi Ii

and its expectation as

E[S] = ∑i vi E[Ii] = (n/N) ∑i vi

this gives the variance as

Var(S) = ∑i vi² Var(Ii) + 2 ∑i<j vi vj Cov(Ii, Ij)

since

E[Ii] = n/N, so Var(Ii) = (n/N)(1 − n/N)

Cov(Ii, Ij) = E[Ii Ij] − E[Ii]E[Ij] = n(n−1)/[N(N−1)] − (n/N)² = −n(N−n)/[N²(N−1)]

we have

Var(S) = (n/N)(1 − n/N) ∑i vi² − 2 n(N−n)/[N²(N−1)] ∑i<j vi vj

we know the identity

(∑i vi)² = ∑i vi² + 2 ∑i<j vi vj

so, writing v̄ = ∑i vi/N and substituting,

Var(S) = [n(N−n)/(N−1)] [∑i vi²/N − v̄²]

so the mean and variance of the said random variable will be

E[S] = n v̄ and Var(S) = [n(N−n)/(N−1)] (∑i vi²/N − v̄²)

Conclusion:

The relation between two random variables was defined through covariance, and using covariance the variance of a sum was obtained for different random variables; the covariance and different moments were obtained with the help of the definition of expectation. If you require further reading, go through

https://en.wikipedia.org/wiki/Expectation

A first course in probability by Sheldon Ross

Schaum’s Outlines of Probability and Statistics

An introduction to probability and statistics by ROHATGI and SALEH.

For more post on mathematics, please follow our Mathematics page

11 Facts On Mathematical Expectation & Random Variable


Mathematical Expectation and random variable    

Mathematical expectation plays a very important role in probability theory; we already discussed its basic definition and basic properties in some previous articles. Now, after discussing the various distributions and their types, in the following article we will get familiar with some more advanced properties of mathematical expectation.

Expectation of sum of random variables | Expectation of function of random variables | Expectation of Joint probability distribution

     We know the mathematical expectation of random variable of discrete nature is

E[X] = ∑x x p(x)

and for the continuous one is

E[X] = ∫−∞..∞ x f(x) dx

now, for the random variables X and Y, if discrete, then with the joint probability mass function p(x,y) the expectation of a function g of the random variables X and Y will be

E[g(X,Y)] = ∑y ∑x g(x,y) p(x,y)

and if continuous, then with the joint probability density function f(x,y) the expectation of the function g of the random variables X and Y will be

E[g(X,Y)] = ∫−∞..∞ ∫−∞..∞ g(x,y) f(x,y) dx dy

if g is the addition of these two random variables in continuous form, then

E[X + Y] = ∫∫ (x + y) f(x,y) dx dy

= ∫∫ x f(x,y) dy dx + ∫∫ y f(x,y) dx dy

= E[X] + E[Y]

and if for the random variables X and Y we have

X ≥ Y

then the expectations also satisfy

E[X] ≥ E[Y]

Example

A Covid-19 hospital is located at a point X uniformly distributed on a road of length L, and a vehicle carrying oxygen for the patients is at a location Y, also uniformly distributed on the road. Find the expected distance between the Covid-19 hospital and the oxygen-carrying vehicle, if they are independent.

Solution:

To find the expected distance between X and Y we have to calculate E { | X-Y | }

Now the joint density function of X and Y will be

f(x,y) = 1/L², 0 < x < L, 0 < y < L

since X and Y are independent and each is uniform with density 1/L on (0, L)

by following this we have

E[|X − Y|] = (1/L²) ∫0..L ∫0..L |x − y| dy dx

now the value of the integral will be

∫0..L ∫0..L |x − y| dy dx = ∫0..L [∫0..x (x − y) dy + ∫x..L (y − x) dy] dx

= ∫0..L [x²/2 + (L − x)²/2] dx

= L³/6 + L³/6 = L³/3

Thus the expected distance between these two points will be

E[|X − Y|] = (1/L²)(L³/3) = L/3
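The L/3 answer is easy to confirm with a quick Monte Carlo sketch (the value of L is ours):

```python
# Simulating E|X - Y| for independent uniforms on (0, L).
import numpy as np

rng = np.random.default_rng(1)
L = 10.0
x = rng.uniform(0, L, size=1_000_000)
y = rng.uniform(0, L, size=1_000_000)
print(np.abs(x - y).mean())  # ≈ 3.333 = L/3
```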

Expectation of Sample mean

As the sample mean of the sequence of random variables X1, X2, …, Xn, each with distribution function F and expected value μ, is

X̄ = ∑i=1..n Xi / n

so the expectation of this sample mean will be

E[X̄] = E[∑i Xi/n]

= (1/n) ∑i E[Xi]

= (1/n)(nμ) = μ

which shows the expected value of sample mean is also μ.

Boole’s Inequality

Boole's inequality can be obtained with the help of the properties of expectations. Suppose the random variable X is defined as

X = ∑i=1..n Ii

where

Ii = 1 if Ai occurs, 0 otherwise

here the Ai are random events; this means the random variable X represents the number of the events Ai that occur. Define another random variable Y as

Y = 1 if X ≥ 1, 0 otherwise

clearly X ≥ Y, and so the expectations satisfy

E[X] ≥ E[Y]

now if we take the values of the random variables X and Y, these expectations will be

E[X] = ∑i=1..n P(Ai)

and

E[Y] = P(at least one of the Ai occurs) = P(∪i=1..n Ai)

substituting these expectations into the above inequality, we get Boole's inequality:

P(∪i=1..n Ai) ≤ ∑i=1..n P(Ai)

Expectation of Binomial random variable | Mean of Binomial random variable

We know that the binomial random variable is the random variable which shows the number of successes in n independent trials, with probability of success p and of failure q = 1−p, so if

X=X1 + X2+ …….+ Xn

where

Xi = 1 if the i-th trial is a success, 0 otherwise

here the Xi are Bernoulli random variables, and the expectation of each is

E[Xi] = 1×p + 0×(1−p) = p

so the expectation of X will be

E[X] = E[X1] + E[X2] + … + E[Xn] = np

Expectation of Negative binomial random variable | Mean of Negative binomial random variable

Let X be a random variable which represents the number of trials needed to collect r successes; such a random variable is known as a negative binomial random variable, and it can be expressed as

X = X1 + X2 + … + Xr

here each Xi denotes the number of trials required, after the (i−1)st success, to obtain the i-th success.

Since each of these Xi is a geometric random variable, and we know the expectation of a geometric random variable is

E[Xi] = 1/p

so

E[X] = E[X1] + … + E[Xr] = r/p

which is the expectation of negative binomial random variable.

Expectation of hypergeometric random variable | Mean of hypergeometric random variable

We will obtain the expectation, or mean, of the hypergeometric random variable with the help of a simple real-life example: if n books are randomly selected from a shelf containing N books of which m are mathematics books, then to find the expected number of mathematics books let X denote the number of mathematics books selected, so we can write X as

X = X1 + X2 + … + Xm

where

Xi = 1 if the i-th mathematics book is selected, 0 otherwise

so

E[Xi] = P(the i-th mathematics book is selected)

= C(N−1, n−1)/C(N, n)

= n/N

which gives

E[X] = E[X1] + … + E[Xm] = mn/N

which is the mean of such a hypergeometric random variable.
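The mn/N mean can be checked against SciPy's hypergeometric distribution (the numbers below are our example values):

```python
# Checking the hypergeometric mean m*n/N against SciPy.
# scipy.stats.hypergeom takes (total N, successes m, draws n), in that order.
from scipy.stats import hypergeom

N, m, n = 50, 12, 7   # N books on the shelf, m mathematics books, n selected
print(hypergeom.mean(N, m, n))  # 1.68
print(m * n / N)                # 1.68
```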

Expected number of matches

This is a very popular problem related to expectation: suppose N people in a room throw their hats into the middle of the room, all the hats are mixed, and then each person randomly chooses one hat. The expected number of people who select their own hat can be obtained by letting X be the number of matches, so

X = X1 + X2 + … + XN

where

Xi = 1 if the i-th person selects his own hat, 0 otherwise

since each person has an equal opportunity to select any of the N hats,

E[Xi] = P(Xi = 1) = 1/N

so

E[X] = E[X1] + … + E[XN] = N × (1/N) = 1

which means exactly one person on average choose his own hat.
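The "exactly one match on average" result is striking enough to be worth simulating; a short sketch (N and trial count are ours):

```python
# Simulating the hat-matching problem: the average number of matches is 1.
import numpy as np

rng = np.random.default_rng(2)
N, trials = 10, 100_000
matches = 0
for _ in range(trials):
    perm = rng.permutation(N)            # random assignment of hats
    matches += np.sum(perm == np.arange(N))
print(matches / trials)                   # ≈ 1.0
```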

The probability of a union of events

Let us obtain the probability of the union of events with the help of expectation. For the events Ai, define the indicators

Ii = 1 if Ai occurs, 0 otherwise

with this we take the random variable

1 − ∏i=1..n (1 − Ii)

which equals 1 if at least one Ai occurs and 0 otherwise, so its expectation will be

E[1 − ∏i (1 − Ii)] = P(∪i=1..n Ai)

and expanding the product and using the expectation property,

E[1 − ∏i (1 − Ii)] = E[∑i Ii − ∑i<j Ii Ij + ∑i<j<k Ii Ij Ik − … + (−1)^(n+1) I1 I2 ⋯ In]

since we have

E[Ii1 Ii2 ⋯ Iik] = P(Ai1 Ai2 ⋯ Aik)

so

E[1 − ∏i (1 − Ii)] = ∑i P(Ai) − ∑i<j P(Ai Aj) + … + (−1)^(n+1) P(A1 A2 ⋯ An)

this implies the probability of the union, i.e. the inclusion-exclusion formula:

P(∪i Ai) = ∑i P(Ai) − ∑i<j P(Ai Aj) + … + (−1)^(n+1) P(A1 A2 ⋯ An)

Bounds from Expectation using Probabilistic method

Suppose S is a finite set, f is a function on the elements of S, and

m = \max_{s \in S} f(s)

here we can obtain a lower bound for this m by the expectation of f(s), where "s" is a random element of S whose expectation we can calculate: since

f(s) \leq m \text{ for every } s \in S

taking the expectation over the random element s gives

m \geq E[f(s)]

here we get the expectation as a lower bound for the maximum value.

Maximum-Minimum identity

The maximum-minimum identity relates the maximum of a set of numbers to the minimums of the subsets of those numbers, that is, for any numbers xi,

\max_i x_i = \sum_{i} x_i - \sum_{i < j} \min(x_i, x_j) + \sum_{i < j < k} \min(x_i, x_j, x_k) - \cdots + (-1)^{n+1} \min(x_1, \ldots, x_n)

To show this, let us first restrict the xi to the interval [0,1]; suppose U is a uniform random variable on (0,1) and define the events Ai as the events that U is less than xi, that is

A_i = \{ U < x_i \}

since at least one of these events occurs exactly when U is less than at least one of the xi,

\bigcup_{i} A_i = \{ U < \max_i x_i \}

and

P\left( \bigcup_{i} A_i \right) = P\{ U < \max_i x_i \} = \max_i x_i

Clearly we know

P(A_i) = P\{ U < x_i \} = x_i

and all of the events in a collection will occur only if U is less than all of the corresponding variables, so

A_{i_1} A_{i_2} \cdots A_{i_k} = \{ U < \min(x_{i_1}, \ldots, x_{i_k}) \}

the probability gives

P(A_{i_1} A_{i_2} \cdots A_{i_k}) = \min(x_{i_1}, \ldots, x_{i_k})

we have the result for the probability of the union as

\max_i x_i = P\left( \bigcup_{i} A_i \right)

following the inclusion-exclusion formula for the probability

P\left( \bigcup_{i} A_i \right) = \sum_{i} P(A_i) - \sum_{i < j} P(A_i A_j) + \cdots + (-1)^{n+1} P(A_1 \cdots A_n)

substituting the probabilities found above gives the identity for xi in [0,1]. To remove this restriction, consider adding the same constant c to every xi: the left side increases by c, and since

0 = (1-1)^n = \sum_{k=0}^{n} \binom{n}{k} (-1)^k

which means

\sum_{k=1}^{n} (-1)^{k+1} \binom{n}{k} = 1

the right side also increases by exactly c; hence we can write the identity for arbitrary numbers, and for random variables Xi,

\max_i X_i = \sum_{i} X_i - \sum_{i < j} \min(X_i, X_j) + \cdots + (-1)^{n+1} \min(X_1, \ldots, X_n)

taking expectation we can find the expected value of the maximum in terms of the expectations of the partial minimums as

E\left[ \max_i X_i \right] = \sum_{i} E[X_i] - \sum_{i < j} E[\min(X_i, X_j)] + \cdots + (-1)^{n+1} E[\min(X_1, \ldots, X_n)]
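Since the identity is purely algebraic, it can be checked exactly on a small set of numbers; a minimal Python sketch with made-up values:

from itertools import combinations

x = [0.3, 0.7, 0.5, 0.9]
n = len(x)
# Alternating sum of the minimums over all nonempty subsets of x.
rhs = sum((-1) ** (k + 1) * sum(min(sub) for sub in combinations(x, k))
          for k in range(1, n + 1))
print(max(x), rhs)   # both 0.9, confirming the maximum-minimum identity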

Conclusion:

The expectation of various distributions, and the connection of expectation with some concepts of probability theory, were the focus of this article, which shows the use of expectation as a tool to get the expected values of different kinds of random variables; if you require further reading, go through the books below.

For more articles on Mathematics, please see our Mathematics page.

https://en.wikipedia.org/wiki/Expectation

A first course in probability by Sheldon Ross

Schaum’s Outlines of Probability and Statistics

An introduction to probability and statistics by ROHATGI and SALEH

Conditional Distribution: 7 Interesting Facts To Know

Conditional distribution

It is very interesting to discuss the conditional case of distribution, where two random variables follow a distribution satisfying one given another; we first briefly see the conditional distribution in both cases of random variables, discrete and continuous, and then, after studying some prerequisites, we focus on the conditional expectations.

Discrete conditional distribution

With the help of the joint probability mass function in the joint distribution, we define the conditional distribution for the discrete random variables X and Y, using conditional probability for X given Y, as the distribution with the probability mass function

p_{X|Y}(x|y) = P\{X = x \mid Y = y\}

= \frac{P\{X = x, Y = y\}}{P\{Y = y\}}

= \frac{p(x, y)}{p_Y(y)}

provided the denominator probability is greater than zero; in a similar way we can write the conditional distribution function as

F_{X|Y}(x|y) = P\{X \leq x \mid Y = y\}

= \sum_{a \leq x} p_{X|Y}(a|y)

in the joint probability, if X and Y are independent random variables then this will turn into

p_{X|Y}(x|y) = P\{X = x \mid Y = y\}

= \frac{P\{X = x\} P\{Y = y\}}{P\{Y = y\}}

= P\{X = x\}

so the discrete conditional distribution, or conditional distribution for the discrete random variable X given Y, is the random variable with the above probability mass function; in a similar way we can define it for Y given X.

Example on discrete conditional distribution

1. Find the probability mass function of the random variable X given Y=1, if the joint probability mass function of the random variables X and Y has the values

p(0,0)=0.4 , p(0,1)=0.2, p(1,0)= 0.1, p(1,1)=0.3

Now first of all, for the value Y=1 we have

p_Y(1) = \sum_{x} p(x, 1) = p(0,1) + p(1,1) = 0.5

so using the definition of the conditional probability mass function

p_{X|Y}(x|1) = P\{X = x \mid Y = 1\}

= \frac{P\{X = x, Y = 1\}}{P\{Y = 1\}}

= \frac{p(x, 1)}{p_Y(1)}

we have

p_{X|Y}(0|1) = \frac{p(0,1)}{p_Y(1)} = \frac{0.2}{0.5} = \frac{2}{5}

and

p_{X|Y}(1|1) = \frac{p(1,1)}{p_Y(1)} = \frac{0.3}{0.5} = \frac{3}{5}
2. Obtain the conditional distribution of X given X+Y=n, where X and Y are Poisson random variables with parameters λ1 and λ2, and X and Y are independent.

Since the random variables X and Y are independent, the conditional distribution will have probability mass function

P\{X = k \mid X + Y = n\} = \frac{P\{X = k, X + Y = n\}}{P\{X + Y = n\}}

= \frac{P\{X = k\} P\{Y = n - k\}}{P\{X + Y = n\}}

since the sum of Poisson random variables is again Poisson, with parameter λ1 + λ2,

P\{X + Y = n\} = e^{-(\lambda_1 + \lambda_2)} \frac{(\lambda_1 + \lambda_2)^n}{n!}

so

P\{X = k \mid X + Y = n\} = \frac{e^{-\lambda_1} \lambda_1^k / k! \; \cdot \; e^{-\lambda_2} \lambda_2^{n-k} / (n-k)!}{e^{-(\lambda_1 + \lambda_2)} (\lambda_1 + \lambda_2)^n / n!}

= \binom{n}{k} \left( \frac{\lambda_1}{\lambda_1 + \lambda_2} \right)^k \left( \frac{\lambda_2}{\lambda_1 + \lambda_2} \right)^{n-k}

thus the conditional distribution of X given X+Y=n is binomial with parameters n and λ1/(λ1+λ2). The above case can be generalized to more than two random variables.
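A short simulation sketch of this fact (assumed parameters λ1 = 2, λ2 = 3 and conditioning value n = 6), comparing the empirical conditional law with the binomial pmf:

import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(3)
lam1, lam2, n = 2.0, 3.0, 6
X = rng.poisson(lam1, 1_000_000)
Y = rng.poisson(lam2, 1_000_000)
cond = X[X + Y == n]   # samples of X restricted to the event X + Y = n
for k in range(n + 1):
    # empirical conditional probability vs Binomial(n, lam1/(lam1+lam2))
    print(k, (cond == k).mean(), binom.pmf(k, n, lam1 / (lam1 + lam2)))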

Continuous conditional distribution

The continuous conditional distribution of the random variable X given y, as already defined, is the continuous distribution with the probability density function

f_{X|Y}(x|y) = \frac{f(x, y)}{f_Y(y)}

provided the denominator density is greater than zero; for small intervals dx and dy this ratio of densities represents the conditional probability

P\{x \leq X \leq x + dx \mid y \leq Y \leq y + dy\} = \frac{f(x, y) \, dx \, dy}{f_Y(y) \, dy}

thus the probability for such a conditional density function is

P\{X \in A \mid Y = y\} = \int_A f_{X|Y}(x|y) \, dx

In a similar way as in the discrete case, if X and Y are independent in the continuous case, then also

f(x, y) = f_X(x) f_Y(y)

and hence

f_{X|Y}(x|y) = \frac{f_X(x) f_Y(y)}{f_Y(y)}

so we can write it as

f_{X|Y}(x|y) = f_X(x)

Example on Continuous conditional distribution

1. Calculate the conditional density function of the random variable X given Y, if the joint probability density function on the open interval (0,1) is given by

f(x, y) = \frac{12}{5} x (2 - x - y), \quad 0 < x < 1, \; 0 < y < 1, \text{ and } 0 \text{ otherwise}

For the random variable X given Y within (0,1), using the above density function, we have

f_{X|Y}(x|y) = \frac{f(x, y)}{f_Y(y)}

= \frac{f(x, y)}{\int_{-\infty}^{\infty} f(x, y) \, dx}

= \frac{x(2 - x - y)}{\int_0^1 x(2 - x - y) \, dx}

= \frac{x(2 - x - y)}{\frac{2}{3} - \frac{y}{2}}

= \frac{6x(2 - x - y)}{4 - 3y}
2. Calculate the conditional probability

P\{X > 1 \mid Y = y\}

if the joint probability density function is given by

f(x, y) = \frac{e^{-x/y} e^{-y}}{y}, \quad 0 < x < \infty, \; 0 < y < \infty, \text{ and } 0 \text{ otherwise}

To find the conditional probability we first require the conditional density function, so by the definition it would be

f_{X|Y}(x|y) = \frac{f(x, y)}{f_Y(y)}

= \frac{e^{-x/y} e^{-y} / y}{(e^{-y}/y) \int_0^{\infty} e^{-x/y} \, dx}

= \frac{1}{y} e^{-x/y}

now using this density function in the probability, the conditional probability is

P\{X > 1 \mid Y = y\} = \int_1^{\infty} \frac{1}{y} e^{-x/y} \, dx

= \left. -e^{-x/y} \right|_1^{\infty}

= e^{-1/y}
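The closed form e^{-1/y} can be checked by numerically integrating the conditional density; a minimal sketch at the assumed value y = 2:

import numpy as np
from scipy.integrate import quad

y = 2.0
# Integrate the conditional density (1/y) e^{-x/y} over x > 1.
val, _ = quad(lambda x: (1 / y) * np.exp(-x / y), 1, np.inf)
print(val, np.exp(-1 / y))   # both about 0.6065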

Conditional distribution of bivariate normal distribution

We know that the bivariate normal distribution of the normal random variables X and Y, with the respective means and variances as parameters and correlation ρ, has the joint probability density function

f(x, y) = \frac{1}{2\pi \sigma_X \sigma_Y \sqrt{1-\rho^2}} \exp\left\{ -\frac{1}{2(1-\rho^2)} \left[ \left( \frac{x-\mu_X}{\sigma_X} \right)^2 + \left( \frac{y-\mu_Y}{\sigma_Y} \right)^2 - \frac{2\rho (x-\mu_X)(y-\mu_Y)}{\sigma_X \sigma_Y} \right] \right\}

so to find the conditional distribution for such a bivariate normal distribution for X given Y, we follow the conditional density function of the continuous random variable with the above joint density function, which gives

f_{X|Y}(x|y) = \frac{1}{\sqrt{2\pi} \, \sigma_X \sqrt{1-\rho^2}} \exp\left\{ -\frac{1}{2\sigma_X^2 (1-\rho^2)} \left[ x - \mu_X - \rho \frac{\sigma_X}{\sigma_Y} (y - \mu_Y) \right]^2 \right\}

By observing this we can say that this is normally distributed with the mean

\mu_X + \rho \frac{\sigma_X}{\sigma_Y} (y - \mu_Y)

and variance

\sigma_X^2 (1 - \rho^2)

in the similar way, the conditional density function for Y given X, already defined, is obtained by just interchanging the positions of the parameters of X and Y.

The marginal density function for X we can obtain from the above joint density function by integrating out y,

f_X(x) = \frac{1}{2\pi \sigma_X \sigma_Y \sqrt{1-\rho^2}} \int_{-\infty}^{\infty} \exp\left\{ -\frac{1}{2(1-\rho^2)} \left[ \left( \frac{x-\mu_X}{\sigma_X} \right)^2 + \left( \frac{y-\mu_Y}{\sigma_Y} \right)^2 - \frac{2\rho (x-\mu_X)(y-\mu_Y)}{\sigma_X \sigma_Y} \right] \right\} dy

let us substitute in the integral

w = \frac{y - \mu_Y}{\sigma_Y}

the density function will be now

f_X(x) = \frac{1}{\sqrt{2\pi} \, \sigma_X} e^{-\frac{(x-\mu_X)^2}{2\sigma_X^2}} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi (1-\rho^2)}} \exp\left\{ -\frac{\left( w - \rho \frac{x-\mu_X}{\sigma_X} \right)^2}{2(1-\rho^2)} \right\} dw

since the total value of

\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi (1-\rho^2)}} \exp\left\{ -\frac{\left( w - \rho \frac{x-\mu_X}{\sigma_X} \right)^2}{2(1-\rho^2)} \right\} dw = 1

by the definition of probability, the density function will be now

f_X(x) = \frac{1}{\sqrt{2\pi} \, \sigma_X} e^{-\frac{(x-\mu_X)^2}{2\sigma_X^2}}

which is nothing but the density function of the random variable X with the usual mean and variance as the parameters.
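These conditional mean and variance formulas can be checked by simulating a bivariate normal and conditioning on Y falling in a narrow window; a rough sketch with made-up parameters:

import numpy as np

rng = np.random.default_rng(4)
mx, my, sx, sy, rho = 1.0, -1.0, 2.0, 1.5, 0.6
cov = [[sx**2, rho * sx * sy], [rho * sx * sy, sy**2]]
X, Y = rng.multivariate_normal([mx, my], cov, 2_000_000).T
y0 = 0.0
sel = X[np.abs(Y - y0) < 0.02]   # approximate conditioning on Y = y0
print(sel.mean(), mx + rho * sx / sy * (y0 - my))   # conditional mean, 1.8
print(sel.var(), sx**2 * (1 - rho**2))              # conditional variance, 2.56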

Joint Probability distribution of function of random variables

So far we know the joint probability distribution of two random variables; now, if we have functions of such random variables, what would be the joint probability distribution of those functions, and how do we calculate the density and distribution function? We ask because there are real life situations where we deal with functions of random variables.

If Y1 and Y2 are functions of the random variables X1 and X2 respectively, which are jointly continuous, then the joint continuous density function of these two functions will be

f_{Y_1, Y_2}(y_1, y_2) = f_{X_1, X_2}(x_1, x_2) \, |J(x_1, x_2)|^{-1}

where the Jacobian, the determinant of the matrix of partial derivatives, is

J(x_1, x_2) = \frac{\partial g_1}{\partial x_1} \frac{\partial g_2}{\partial x_2} - \frac{\partial g_1}{\partial x_2} \frac{\partial g_2}{\partial x_1} \neq 0

and Y1 = g1(X1, X2) and Y2 = g2(X1, X2) for some functions g1 and g2; here g1 and g2 satisfy the conditions for the Jacobian, being continuous with continuous partial derivatives.

Now the probability for such functions of random variables will be

P\{(Y_1, Y_2) \in C\} = \iint_{(x_1, x_2) : (g_1(x_1,x_2), \, g_2(x_1,x_2)) \in C} f_{X_1, X_2}(x_1, x_2) \, dx_1 \, dx_2

Examples on Joint Probability distribution of function of random variables

1. Find the joint density function of the random variables Y1 = X1 + X2 and Y2 = X1 - X2, where X1 and X2 are jointly continuous with joint probability density function f; also discuss the result for different distributions.

Here first we will check the Jacobian: since g1(x1, x2) = x1 + x2 and g2(x1, x2) = x1 - x2,

J(x_1, x_2) = (1)(-1) - (1)(1) = -2

solving Y1 = X1 + X2 and Y2 = X1 - X2 gives X1 = (Y1 + Y2)/2 and X2 = (Y1 - Y2)/2, so

f_{Y_1, Y_2}(y_1, y_2) = \frac{1}{2} f_{X_1, X_2}\left( \frac{y_1 + y_2}{2}, \frac{y_1 - y_2}{2} \right)

if these random variables are independent uniform random variables on (0,1),

f_{Y_1, Y_2}(y_1, y_2) = \frac{1}{2}, \quad 0 \leq y_1 + y_2 \leq 2, \; 0 \leq y_1 - y_2 \leq 2

or if these random variables are independent exponential random variables with parameters λ1 and λ2,

f_{Y_1, Y_2}(y_1, y_2) = \frac{\lambda_1 \lambda_2}{2} \exp\left[ -\lambda_1 \frac{y_1 + y_2}{2} - \lambda_2 \frac{y_1 - y_2}{2} \right], \quad y_1 + y_2 > 0, \; y_1 - y_2 > 0

or if these random variables are independent standard normal random variables, then

f_{Y_1, Y_2}(y_1, y_2) = \frac{1}{4\pi} e^{-[(y_1+y_2)^2 + (y_1-y_2)^2]/8}

= \frac{1}{4\pi} e^{-(y_1^2 + y_2^2)/4}

so Y1 and Y2 are independent normal random variables, each with mean 0 and variance 2.
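The Jacobian step is easy to reproduce symbolically; a minimal sketch using SymPy (assuming it is installed):

import sympy as sp

x1, x2 = sp.symbols('x1 x2')
g = sp.Matrix([x1 + x2, x1 - x2])       # g1 = x1 + x2, g2 = x1 - x2
J = g.jacobian([x1, x2])
print(J.det())   # -2, so |J|^(-1) = 1/2 appears in the density formula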
2. If X and Y are independent standard normal random variables with joint density

f(x, y) = \frac{1}{2\pi} e^{-(x^2 + y^2)/2}

calculate the joint distribution of the corresponding polar coordinates.

We will convert X and Y into R and Θ by the usual conversion

r = g_1(x, y) = \sqrt{x^2 + y^2}, \quad \theta = g_2(x, y) = \tan^{-1} \frac{y}{x}

so the partial derivatives of these functions will be

\frac{\partial g_1}{\partial x} = \frac{x}{\sqrt{x^2 + y^2}}

\frac{\partial g_1}{\partial y} = \frac{y}{\sqrt{x^2 + y^2}}

\frac{\partial g_2}{\partial x} = \frac{-y}{x^2 + y^2}

\frac{\partial g_2}{\partial y} = \frac{x}{x^2 + y^2}

so the Jacobian using these functions is

J = \frac{x^2}{(x^2 + y^2)^{3/2}} + \frac{y^2}{(x^2 + y^2)^{3/2}} = \frac{1}{\sqrt{x^2 + y^2}} = \frac{1}{r}

if both the random variables X and Y are greater than zero, then the conditional joint density function given X > 0, Y > 0 is

f(x, y \mid X > 0, Y > 0) = \frac{f(x, y)}{P(X > 0, Y > 0)} = \frac{2}{\pi} e^{-(x^2 + y^2)/2}

now converting the cartesian coordinates to polar coordinates using

x = r\cos\theta, \quad y = r\sin\theta

the probability density function for the positive values will be

f(r, \theta) = \frac{2}{\pi} r e^{-r^2/2}, \quad 0 < \theta < \frac{\pi}{2}, \; 0 < r < \infty

for the other sign combinations of X and Y, the density functions in similar ways are

f(r, \theta) = \frac{2}{\pi} r e^{-r^2/2}, \quad \frac{\pi}{2} < \theta < \pi

f(r, \theta) = \frac{2}{\pi} r e^{-r^2/2}, \quad \pi < \theta < \frac{3\pi}{2}

f(r, \theta) = \frac{2}{\pi} r e^{-r^2/2}, \quad \frac{3\pi}{2} < \theta < 2\pi

now from the average of the above densities we can state the density function as

f(r, \theta) = \frac{1}{2\pi} r e^{-r^2/2}, \quad 0 < \theta < 2\pi, \; 0 < r < \infty

and the marginal density functions from this joint density of the polar coordinates over the interval (0, 2π) are

f(r) = r e^{-r^2/2}, \quad 0 < r < \infty \quad \text{and} \quad f(\theta) = \frac{1}{2\pi}, \quad 0 < \theta < 2\pi

so R and Θ are independent, with Θ uniform on (0, 2π).
3. Find the joint density function of the functions of random variables

U=X+Y and V=X/(X+Y)

where X and Y are gamma random variables with parameters (α, λ) and (β, λ) respectively.

Using the definition of the gamma distribution and independence, the joint density function of X and Y will be

f_{X,Y}(x, y) = \frac{\lambda e^{-\lambda x} (\lambda x)^{\alpha-1}}{\Gamma(\alpha)} \cdot \frac{\lambda e^{-\lambda y} (\lambda y)^{\beta-1}}{\Gamma(\beta)}

consider the given functions as

g1 (x,y) =x+y , g2 (x,y) =x/(x+y),

so the differentiation of these functions gives

\frac{\partial g_1}{\partial x} = 1, \quad \frac{\partial g_1}{\partial y} = 1

\frac{\partial g_2}{\partial x} = \frac{y}{(x+y)^2}, \quad \frac{\partial g_2}{\partial y} = -\frac{x}{(x+y)^2}

now the Jacobian is

J = (1)\left( -\frac{x}{(x+y)^2} \right) - (1)\left( \frac{y}{(x+y)^2} \right) = -\frac{1}{x+y}

after solving the given equations for the variables, x = uv and y = u(1-v), and |J|^{-1} = x + y = u, so the probability density function is

f_{U,V}(u, v) = f_{X,Y}(uv, u(1-v)) \cdot u

= \frac{\lambda e^{-\lambda u} (\lambda u)^{\alpha+\beta-1}}{\Gamma(\alpha+\beta)} \cdot \frac{v^{\alpha-1} (1-v)^{\beta-1} \, \Gamma(\alpha+\beta)}{\Gamma(\alpha) \Gamma(\beta)}

where we can use the relation

B(\alpha, \beta) = \int_0^1 v^{\alpha-1} (1-v)^{\beta-1} \, dv

B(\alpha, \beta) = \frac{\Gamma(\alpha) \Gamma(\beta)}{\Gamma(\alpha+\beta)}

so U and V are independent, with U a gamma random variable with parameters (α+β, λ) and V a beta random variable with parameters (α, β).
4. Calculate the joint probability density function of

Y1 =X1 +X2+ X3 , Y2 =X1– X2 , Y3 =X1 – X3

where the random variables X1 , X2, X3 are standard normal random variables.

Now let us calculate the Jacobian by using the partial derivatives of

Y1 =X1 +X2+ X3 , Y2 =X1– X2 , Y3 =X1 – X3

the matrix of partial derivatives has rows (1, 1, 1), (1, -1, 0) and (1, 0, -1), and its determinant is

J = 3

solving for the variables X1 , X2 and X3,

X1 = (Y1 + Y2 + Y3)/3 , X2 = (Y1 – 2Y2 + Y3)/3 , X3 = (Y1 + Y2 -2 Y3)/3

we can generalize the joint density function as

f_{Y_1, Y_2, Y_3}(y_1, y_2, y_3) = \frac{1}{3} f_{X_1, X_2, X_3}\left( \frac{y_1+y_2+y_3}{3}, \frac{y_1-2y_2+y_3}{3}, \frac{y_1+y_2-2y_3}{3} \right)

for standard normal variables the joint probability density function is

f_{X_1, X_2, X_3}(x_1, x_2, x_3) = \frac{1}{(2\pi)^{3/2}} e^{-\sum_{i=1}^{3} x_i^2 / 2}

hence

f_{Y_1, Y_2, Y_3}(y_1, y_2, y_3) = \frac{1}{3 (2\pi)^{3/2}} e^{-Q(y_1, y_2, y_3)/2}

where the quadratic form in the exponent is

Q(y_1, y_2, y_3) = \frac{y_1^2}{3} + \frac{2}{3} y_2^2 + \frac{2}{3} y_3^2 - \frac{2}{3} y_2 y_3

5. Compute the joint density function of Y1, ……, Yn and the marginal density function of Yn, where

Y_i = X_1 + X_2 + \cdots + X_i, \quad i = 1, \ldots, n

and the Xi are independent identically distributed exponential random variables with parameter λ.

For the random variables of the form

Y1 =X1 , Y2 =X1 + X2 , ……, Yn =X1 + ……+ Xn

the matrix of partial derivatives is lower triangular with ones on the diagonal, and hence the Jacobian's value is one; the joint density function for the exponential random variables is

f_{X_1, \ldots, X_n}(x_1, \ldots, x_n) = \prod_{i=1}^{n} \lambda e^{-\lambda x_i} = \lambda^n e^{-\lambda (x_1 + \cdots + x_n)}

and the values of the variables Xi will be

x_1 = y_1, \; x_2 = y_2 - y_1, \; \ldots, \; x_n = y_n - y_{n-1}

so the joint density function is

f_{Y_1, \ldots, Y_n}(y_1, \ldots, y_n) = f_{X_1, \ldots, X_n}(y_1, y_2 - y_1, \ldots, y_n - y_{n-1})

= \lambda^n \exp\left\{ -\lambda [y_1 + (y_2 - y_1) + \cdots + (y_n - y_{n-1})] \right\}

= \lambda^n e^{-\lambda y_n}, \quad 0 < y_1 < y_2 < \cdots < y_n

Now to find the marginal density function of Yn we will integrate out the variables one by one, as

f_{Y_2, \ldots, Y_n}(y_2, \ldots, y_n) = \int_0^{y_2} \lambda^n e^{-\lambda y_n} \, dy_1

= \lambda^n y_2 e^{-\lambda y_n}

and

f_{Y_3, \ldots, Y_n}(y_3, \ldots, y_n) = \int_0^{y_3} \lambda^n y_2 e^{-\lambda y_n} \, dy_2

= \lambda^n \frac{y_3^2}{2} e^{-\lambda y_n}

likewise

f_{Y_4, \ldots, Y_n}(y_4, \ldots, y_n) = \lambda^n \frac{y_4^3}{3!} e^{-\lambda y_n}

if we continue this process we will get

f_{Y_n}(y_n) = \lambda^n \frac{y_n^{n-1}}{(n-1)!} e^{-\lambda y_n}, \quad 0 < y_n < \infty

which is the marginal density function, a gamma density with parameters (n, λ).
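This gamma conclusion can be verified by simulation; a minimal sketch with the assumed values λ = 0.5 and n = 6, comparing moments of the simulated partial sum with the Gamma(n, λ) distribution:

import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(5)
lam, n = 0.5, 6
# Y_n = X_1 + ... + X_n for i.i.d. exponential(lambda) variables.
Yn = rng.exponential(scale=1 / lam, size=(200_000, n)).sum(axis=1)
print(Yn.mean(), gamma.mean(n, scale=1 / lam))   # both 12
print(Yn.var(), gamma.var(n, scale=1 / lam))     # both 24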

Conclusion:

The conditional distribution for discrete and continuous random variables was discussed with different examples, considering several types of these random variables, where independent random variables play an important role. In addition, the joint distribution for functions of jointly continuous random variables was explained with suitable examples; if you require further reading, go through the links and books below.

For more post on Mathematics, please refer to our Mathematics Page

https://en.wikipedia.org/wiki/Joint_probability_distribution

A first course in probability by Sheldon Ross

Schaum’s Outlines of Probability and Statistics

An introduction to probability and statistics by ROHATGI and SALEH

Jointly Distributed Random Variables: 11 Important Facts

Jointly distributed random variables

Jointly distributed random variables are two or more random variables whose probabilities are jointly distributed; in other words, in experiments where the different outcomes have a common probability law, the variables are said to be jointly distributed, or to have a joint distribution. Such situations occur frequently when dealing with problems of chance.

Joint distribution function | Joint Cumulative probability distribution function | joint probability mass function | joint probability density function

For the random variables X and Y, the joint distribution function, or joint cumulative distribution function, is

F(a, b) = P\{X \leq a, Y \leq b\}, \quad -\infty < a, b < \infty

where the nature of the joint probability depends on the nature of the random variables X and Y, either discrete or continuous; the individual distribution function for X can be obtained from this joint cumulative distribution function as

F_X(a) = P\{X \leq a\} = F(a, \infty)

similarly for Y as

F_Y(b) = P\{Y \leq b\} = F(\infty, b)

these individual distribution functions of X and Y are known as marginal distribution functions when a joint distribution is under consideration. These distributions are very helpful for getting probabilities like

P\{a_1 < X \leq a_2, \; b_1 < Y \leq b_2\} = F(a_2, b_2) + F(a_1, b_1) - F(a_1, b_2) - F(a_2, b_1)

and in addition the joint probability mass function for the random variables X and Y is defined as

p(x, y) = P\{X = x, Y = y\}

the individual probability mass or density functions for X and Y can be obtained with the help of such a joint probability mass or density function; in terms of discrete random variables,

p_X(x) = \sum_{y} p(x, y) \quad \text{and} \quad p_Y(y) = \sum_{x} p(x, y)

and in terms of continuous random variables, the joint probability density function f(x, y) satisfies

P\{(X, Y) \in C\} = \iint_{(x, y) \in C} f(x, y) \, dx \, dy

where C is any set in the two dimensional plane, and the joint distribution function for continuous random variables will be

F(a, b) = \int_{-\infty}^{b} \int_{-\infty}^{a} f(x, y) \, dx \, dy

the probability density function from this distribution function can be obtained by differentiating,

f(a, b) = \frac{\partial^2}{\partial a \, \partial b} F(a, b)

and the marginal probabilities from the joint probability density function are

P\{X \in A\} = \int_A f_X(x) \, dx

as

f_X(x) = \int_{-\infty}^{\infty} f(x, y) \, dy

and

f_Y(y) = \int_{-\infty}^{\infty} f(x, y) \, dx

with respect to the random variables X and Y respectively.

Examples on Joint distribution

1. The joint probabilities for the random variables X and Y, representing the numbers of mathematics and statistics books chosen from a set of books containing 3 mathematics, 4 statistics and 5 physics books, when 3 books are taken randomly, are

p(i, j) = \frac{\binom{3}{i} \binom{4}{j} \binom{5}{3-i-j}}{\binom{12}{3}}, \quad \binom{12}{3} = 220

for example, p(0, 0) = \binom{5}{3}/220 = 10/220.

2. Find the joint probability mass function for a sample of families having 15% no child, 20% one child, 35% two children and 30% three children, if a family is chosen randomly from this sample and each child is equally likely to be a boy or a girl.

The joint probability mass function for the numbers of boys B and girls G we will find by using the definition as

P\{B = b, G = g\} = P\{B = b, G = g \mid b + g \text{ children}\} \, P\{b + g \text{ children}\} = \binom{b+g}{b} \left( \frac{1}{2} \right)^{b+g} P\{b + g \text{ children}\}

and this we can illustrate in tabular form by evaluating it for each pair (b, g); for instance, P{B = 1, G = 1} = 0.35 × 2 × (1/2)² = 0.175.
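The full table is mechanical to produce in code; a minimal sketch, assuming each child is independently a boy or a girl with probability 1/2:

from math import comb

family = {0: 0.15, 1: 0.20, 2: 0.35, 3: 0.30}   # P(number of children)

def p(b, g):
    # joint pmf of (boys, girls): choose which of the b+g children are boys
    n = b + g
    return family.get(n, 0.0) * comb(n, b) * 0.5 ** n

print(p(1, 1))                                           # 0.175
print(sum(p(b, g) for b in range(4) for g in range(4)))  # sums to 1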
3. Calculate the probabilities

P\{X > 1, Y < 1\}, \quad P\{X < Y\}, \quad P\{X < a\}

if for the random variables X and Y the joint probability density function is given by

f(x, y) = 2 e^{-x} e^{-2y}, \quad 0 < x < \infty, \; 0 < y < \infty, \text{ and } 0 \text{ otherwise}

with the help of the definition of joint probability for continuous random variables

P\{(X, Y) \in C\} = \iint_{(x, y) \in C} f(x, y) \, dx \, dy

and the given joint density function, the first probability for the given range will be

P\{X > 1, Y < 1\} = \int_0^1 \int_1^{\infty} 2 e^{-x} e^{-2y} \, dx \, dy

= \int_0^1 2 e^{-2y} \left( -e^{-x} \Big|_1^{\infty} \right) dy

= e^{-1} \int_0^1 2 e^{-2y} \, dy

= e^{-1} (1 - e^{-2})

in the similar way the probability

P\{X < Y\} = \iint_{x < y} 2 e^{-x} e^{-2y} \, dx \, dy

= \int_0^{\infty} \int_0^{y} 2 e^{-x} e^{-2y} \, dx \, dy

= \int_0^{\infty} 2 e^{-2y} (1 - e^{-y}) \, dy

= 1 - \frac{2}{3} = \frac{1}{3}

and finally

P\{X < a\} = \int_0^{a} \int_0^{\infty} 2 e^{-2y} e^{-x} \, dy \, dx

= \int_0^{a} e^{-x} \, dx

= 1 - e^{-a}
4. Find the density function of the quotient X/Y of the random variables X and Y if their joint probability density function is

f(x, y) = e^{-(x+y)}, \quad 0 < x < \infty, \; 0 < y < \infty, \text{ and } 0 \text{ otherwise}

To find the probability density function of the function X/Y, we first find the joint distribution function and then differentiate the obtained result,

so by the definition of the distribution function and the given probability density function we have

F_{X/Y}(a) = P\left\{ \frac{X}{Y} \leq a \right\}

= \iint_{x/y \leq a} e^{-(x+y)} \, dx \, dy

= \int_0^{\infty} \int_0^{ay} e^{-(x+y)} \, dx \, dy

= \int_0^{\infty} (1 - e^{-ay}) e^{-y} \, dy

= \left[ -e^{-y} + \frac{e^{-(a+1)y}}{a+1} \right]_0^{\infty} = 1 - \frac{1}{a+1}

thus by differentiating this distribution function with respect to a we will get the density function as

f_{X/Y}(a) = \frac{1}{(a+1)^2}

where a is within zero to infinity.

Independent random variables and joint distribution

In the joint distribution, the random variables X and Y are said to be independent if

P\{X \in A, Y \in B\} = P\{X \in A\} P\{Y \in B\}

where A and B are real sets. As we already know in terms of events, independent random variables are random variables whose events are independent.

Thus for any values of a and b

P\{X \leq a, Y \leq b\} = P\{X \leq a\} P\{Y \leq b\}

and the joint distribution or cumulative distribution function for the independent random variables X and Y will be

F(a, b) = F_X(a) F_Y(b)

if we consider the discrete random variables X and Y, then

p(x, y) = p_X(x) p_Y(y)

since

P\{X \in A, Y \in B\} = \sum_{y \in B} \sum_{x \in A} p(x, y)

= \sum_{y \in B} \sum_{x \in A} p_X(x) p_Y(y)

= \sum_{y \in B} p_Y(y) \sum_{x \in A} p_X(x)

= P\{Y \in B\} P\{X \in A\}

similarly for the continuous random variables also

f(x, y) = f_X(x) f_Y(y)

Example of independent joint distribution

1. If on a specific day the patients entering a hospital are Poisson distributed with parameter λ, the probability of a male patient is p, and the probability of a female patient is (1-p), then show that the numbers of male patients and female patients entering the hospital are independent Poisson random variables with parameters λp and λ(1-p).

Consider the numbers of male and female patients denoted by the random variables X and Y; then

P\{X = i, Y = j\} = P\{X = i, Y = j \mid X + Y = i + j\} \, P\{X + Y = i + j\}

as X+Y is the total number of patients entering the hospital, which is Poisson distributed, so

P\{X + Y = i + j\} = e^{-\lambda} \frac{\lambda^{i+j}}{(i+j)!}

as the probability of a male patient is p and of a female patient is (1-p), the number of males out of a fixed total of patients follows the binomial probability

P\{X = i, Y = j \mid X + Y = i + j\} = \binom{i+j}{i} p^i (1-p)^j

using these two values we will get the above joint probability as

P\{X = i, Y = j\} = \binom{i+j}{i} p^i (1-p)^j \, e^{-\lambda} \frac{\lambda^{i+j}}{(i+j)!}

= e^{-\lambda} \frac{(\lambda p)^i \, [\lambda (1-p)]^j}{i! \, j!}

= e^{-\lambda p} \frac{(\lambda p)^i}{i!} \cdot e^{-\lambda (1-p)} \frac{[\lambda (1-p)]^j}{j!}

thus the marginal probabilities of male and female patients will be

P\{X = i\} = \sum_{j} P\{X = i, Y = j\} = e^{-\lambda p} \frac{(\lambda p)^i}{i!}

and

P\{Y = j\} = e^{-\lambda (1-p)} \frac{[\lambda (1-p)]^j}{j!}

which shows both of them are Poisson random variables with the parameters λp and λ(1-p), and since the joint probability factors into the product of the marginals, they are independent.

2. Find the probability that a person has to wait more than ten minutes at a meeting with a client, if the client and the person each arrive between 12 and 1 pm, their arrival times being independent and uniformly distributed.

Consider the random variables X and Y denoting the arrival times, in minutes after 12, of the person and the client, so X and Y are independent uniform on (0, 60), and the required probability, computed jointly for X and Y, will be

P\{X + 10 < Y\} + P\{Y + 10 < X\} = 2 P\{X + 10 < Y\}

= 2 \iint_{x + 10 < y} \left( \frac{1}{60} \right)^2 dx \, dy

= \frac{2}{60^2} \int_{10}^{60} \int_0^{y - 10} dx \, dy

= \frac{2}{60^2} \int_{10}^{60} (y - 10) \, dy

= \frac{2}{60^2} \cdot \frac{50^2}{2} = \frac{25}{36}

3. Calculate

P\{X \geq YZ\}

where X, Y and Z are independent uniform random variables over the interval (0,1).

Here the probability will be

P\{X \geq YZ\} = \iiint_{x \geq yz} f(x, y, z) \, dx \, dy \, dz

for the uniform distribution the joint density function is

f(x, y, z) = 1, \quad 0 \leq x, y, z \leq 1

for the given range, so

P\{X \geq YZ\} = \int_0^1 \int_0^1 \int_{yz}^1 dx \, dy \, dz

= \int_0^1 \int_0^1 (1 - yz) \, dy \, dz

= \int_0^1 \left( 1 - \frac{z}{2} \right) dz

= 1 - \frac{1}{4} = \frac{3}{4}
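A one-line Monte Carlo check of this answer (a rough sketch; the sample size is arbitrary):

import numpy as np

rng = np.random.default_rng(6)
X, Y, Z = rng.uniform(size=(3, 1_000_000))
print((X >= Y * Z).mean())   # close to 3/4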

SUMS OF INDEPENDENT RANDOM VARIABLES BY JOINT DISTRIBUTION

For the sum of the independent continuous random variables X and Y, with probability density functions fX and fY, the cumulative distribution function will be

F_{X+Y}(a) = P\{X + Y \leq a\}

= \iint_{x + y \leq a} f_X(x) f_Y(y) \, dx \, dy

= \int_{-\infty}^{\infty} \int_{-\infty}^{a-y} f_X(x) f_Y(y) \, dx \, dy

= \int_{-\infty}^{\infty} F_X(a - y) f_Y(y) \, dy

by differentiating this cumulative distribution function, the probability density function of these independent sums is

f_{X+Y}(a) = \frac{d}{da} \int_{-\infty}^{\infty} F_X(a - y) f_Y(y) \, dy

= \int_{-\infty}^{\infty} f_X(a - y) f_Y(y) \, dy

following these two results, we will now look at some continuous random variables and the distributions of their sums as independent variables.

sum of independent uniform random variables

For the random variables X and Y, uniformly distributed over the interval (0,1), the probability density function of each of these independent variables is

f_X(a) = f_Y(a) = 1, \quad 0 < a < 1, \text{ and } 0 \text{ otherwise}

so for the sum X+Y we have

f_{X+Y}(a) = \int_0^1 f_X(a - y) \, dy

for any value of a lying between zero and one

f_{X+Y}(a) = \int_0^a dy = a

if we restrict a between one and two, it will be

f_{X+Y}(a) = \int_{a-1}^1 dy = 2 - a

this gives the triangular shape density function

f_{X+Y}(a) = a \text{ for } 0 \leq a \leq 1, \quad f_{X+Y}(a) = 2 - a \text{ for } 1 < a < 2, \quad \text{and } 0 \text{ otherwise}

if we generalize to n independent uniform random variables X1 to Xn, then their distribution function, by mathematical induction, will be

F_{X_1 + \cdots + X_n}(x) = \frac{x^n}{n!}, \quad 0 \leq x \leq 1
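The formula x^n/n! on 0 ≤ x ≤ 1 is easy to test by simulation; a minimal sketch with the assumed values n = 4 and x = 0.9:

import numpy as np
from math import factorial

rng = np.random.default_rng(7)
n, x = 4, 0.9
S = rng.uniform(size=(2_000_000, n)).sum(axis=1)   # sums of n uniforms
print((S <= x).mean(), x**n / factorial(n))        # both about 0.0273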

sum of independent Gamma random variables

If we have two independent gamma random variables X and Y, with parameters (s, λ) and (t, λ), and with the usual density function

f(y) = \frac{\lambda e^{-\lambda y} (\lambda y)^{t-1}}{\Gamma(t)}, \quad y > 0

then, following the convolution formula, the density of the sum of the independent gamma random variables is

f_{X+Y}(a) = \frac{1}{\Gamma(s) \Gamma(t)} \int_0^a \lambda e^{-\lambda(a-y)} [\lambda (a-y)]^{s-1} \, \lambda e^{-\lambda y} (\lambda y)^{t-1} \, dy

= K e^{-\lambda a} \int_0^a (a - y)^{s-1} y^{t-1} \, dy

= K e^{-\lambda a} a^{s+t-1} \int_0^1 (1 - x)^{s-1} x^{t-1} \, dx \quad (\text{substituting } y = ax)

= C e^{-\lambda a} a^{s+t-1}

and since the density must integrate to one, the constant C is determined, giving

f_{X+Y}(a) = \frac{\lambda e^{-\lambda a} (\lambda a)^{s+t-1}}{\Gamma(s+t)}, \quad a > 0

this shows that the sum of independent gamma random variables with the same rate λ is again gamma, with parameters (s + t, λ).

sum of independent exponential random variables

    In the similar way as gamma random variable the sum of independent exponential random variables we can obtain density function and distribution function by just specifically assigning values of gamma random variables.

Sum of independent normal random variable | sum of independent Normal distribution

If we have n independent normal random variables Xi, i = 1, 2, …, n, with respective means μi and variances σi², then their sum is also a normal random variable, with mean Σμi and variance Σσi².

We first show the normally distributed independent sum for two normal random variables: X with parameters 0 and σ², and Y with parameters 0 and 1. Let us find the probability density function of the sum X+Y with the constant

c = \frac{1}{2\sigma^2} + \frac{1}{2} = \frac{1 + \sigma^2}{2\sigma^2}

in the joint distribution density function

f_{X+Y}(a) = \int_{-\infty}^{\infty} f_X(a - y) f_Y(y) \, dy

with the help of the definition of the density function of the normal distribution,

f_X(a - y) f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left\{ -\frac{(a-y)^2}{2\sigma^2} \right\} \frac{1}{\sqrt{2\pi}} \exp\left\{ -\frac{y^2}{2} \right\}

= \frac{1}{2\pi\sigma} \exp\left\{ -\frac{a^2}{2\sigma^2} \right\} \exp\left\{ -c \left( y^2 - 2y \frac{a}{1+\sigma^2} \right) \right\}

thus the density function will be

f_{X+Y}(a) = \frac{1}{2\pi\sigma} \exp\left\{ -\frac{a^2}{2\sigma^2} \right\} \exp\left\{ \frac{a^2}{2\sigma^2 (1+\sigma^2)} \right\} \int_{-\infty}^{\infty} \exp\left\{ -c \left( y - \frac{a}{1+\sigma^2} \right)^2 \right\} dy

= \frac{1}{2\pi\sigma} \exp\left\{ -\frac{a^2}{2(1+\sigma^2)} \right\} \sqrt{\frac{\pi}{c}}

= \frac{1}{\sqrt{2\pi (1+\sigma^2)}} \exp\left\{ -\frac{a^2}{2(1+\sigma^2)} \right\}

which is nothing but the density function of a normal distribution with mean 0 and variance (1+σ²); following the same argument we can say that

X_1 + X_2 \sim N(\mu_1 + \mu_2, \; \sigma_1^2 + \sigma_2^2)

with the usual means and variances: expanding and observing, the sum is normally distributed with mean the sum of the respective means and variance the sum of the respective variances;

thus in the same way the nth sum will be a normally distributed random variable with mean Σμi and variance Σσi².

Sums of independent Poisson random variables

If we have two independent Poisson random variables X and Y with parameters λ1 and λ2, then their sum X+Y is also a Poisson random variable, i.e. Poisson distributed:

since X and Y are Poisson distributed and we can write their sum as a union of disjoint events,

P\{X + Y = n\} = \sum_{k=0}^{n} P\{X = k, Y = n - k\}

by using the probability of independent random variables

= \sum_{k=0}^{n} P\{X = k\} P\{Y = n - k\}

= \sum_{k=0}^{n} e^{-\lambda_1} \frac{\lambda_1^k}{k!} \, e^{-\lambda_2} \frac{\lambda_2^{n-k}}{(n-k)!}

= e^{-(\lambda_1 + \lambda_2)} \frac{1}{n!} \sum_{k=0}^{n} \binom{n}{k} \lambda_1^k \lambda_2^{n-k}

= e^{-(\lambda_1 + \lambda_2)} \frac{(\lambda_1 + \lambda_2)^n}{n!}

so the sum X+Y is also Poisson distributed, with the mean λ1 + λ2.

Sums of independent binomial random variables

If we have two independent binomial random variables X and Y with parameters (n, p) and (m, p), then their sum X+Y is also a binomial random variable, i.e. binomially distributed with parameters (n + m, p):

let us use the probability of the sum with the definition of the binomial, writing q = 1 - p, as

P\{X + Y = k\} = \sum_{i=0}^{k} P\{X = i, Y = k - i\}

= \sum_{i=0}^{k} P\{X = i\} P\{Y = k - i\}

= \sum_{i=0}^{k} \binom{n}{i} p^i q^{n-i} \binom{m}{k-i} p^{k-i} q^{m-k+i}

= p^k q^{n+m-k} \sum_{i=0}^{k} \binom{n}{i} \binom{m}{k-i}

which, using the identity \sum_{i} \binom{n}{i} \binom{m}{k-i} = \binom{n+m}{k}, gives

P\{X + Y = k\} = \binom{n+m}{k} p^k q^{n+m-k}

so the sum X+Y is also binomially distributed, with parameters (n + m, p).
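Since the pmf of a sum of independent discrete variables is the convolution of the pmfs, this result can be confirmed numerically; a minimal sketch with assumed parameters n = 5, m = 7, p = 0.3:

import numpy as np
from scipy.stats import binom

n, m, p = 5, 7, 0.3
conv = np.convolve(binom.pmf(np.arange(n + 1), n, p),
                   binom.pmf(np.arange(m + 1), m, p))
# the convolution should equal the Binomial(n + m, p) pmf
print(np.allclose(conv, binom.pmf(np.arange(n + m + 1), n + m, p)))  # True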

Conclusion:

The concept of jointly distributed random variables, which gives the distribution of more than one variable in a situation, was discussed; in addition, the basic concept of independent random variables, with the help of the joint distribution, and the sums of independent variables for some example distributions were given with their parameters. If you require further reading, go through the mentioned books. For more posts on mathematics, please click here.

https://en.wikipedia.org

A first course in probability by Sheldon Ross

Schaum’s Outlines of Probability and Statistics

An introduction to probability and statistics by ROHATGI and SALEH

Gamma Distribution Exponential Family: 21 Important Facts

Content

  1. Special form of Gamma distributions and relationships of Gamma distribution
  2. Gamma distribution exponential family
  3. Relationship between gamma and normal distribution
  4. Poisson gamma distribution | poisson gamma distribution negative binomial
  5. Weibull gamma distribution
  6. Application of gamma distribution in real life | gamma distribution uses | application of gamma distribution in statistics 
  7. Beta gamma distribution | relationship between gamma and beta distribution
  8. Bivariate gamma distribution
  9. Double gamma distribution
  10. Relation between gamma and exponential distribution | exponential and gamma distribution | gamma exponential distribution
  11. Fit gamma distribution
  12. Shifted gamma distribution
  13. Truncated gamma distribution
  14. Survival function of gamma distribution
  15. MLE of gamma distribution | maximum likelihood gamma distribution | likelihood function of gamma distribution
  16. Gamma distribution parameter estimation method of moments | method of moments estimator gamma distribution
  17. Confidence interval for gamma distribution
  18. Gamma distribution conjugate prior for exponential distribution | gamma prior distribution | posterior distribution poisson gamma
  19. Gamma distribution quantile function
  20. Generalized gamma distribution
  21. Beta generalized gamma distribution

Special form of Gamma distributions and relationships of Gamma distribution

In this article we will discuss the special forms of gamma distributions and the relationships of the gamma distribution with different continuous and discrete random variables; some estimation methods used when sampling from a population modelled by the gamma distribution are also briefly discussed.

Gamma distribution exponential family

The gamma distribution is a two parameter exponential family, which is a large and widely applicable family of distributions, as most real life problems can be modelled in the gamma distribution exponential family, and quick and useful calculations within the exponential family can be done easily; with its two parameters, the probability density function is

f(x \mid \alpha, \lambda) = \frac{\lambda^{\alpha} x^{\alpha-1} e^{-\lambda x}}{\Gamma(\alpha)}, \quad x > 0

if we restrict α (alpha) to a known value, this two parameter family reduces to a one parameter exponential family in λ,

f(x \mid \lambda) = \frac{x^{\alpha-1}}{\Gamma(\alpha)} e^{-\lambda x + \alpha \log \lambda}

and similarly, for a known λ (lambda), it reduces to a one parameter exponential family in α,

f(x \mid \alpha) = e^{-\lambda x} \exp\left\{ \alpha \log \lambda + (\alpha - 1) \log x - \log \Gamma(\alpha) \right\}

Relationship between gamma and normal distribution

In the probability density function of the gamma distribution, if we take the shape parameter alpha near 50, we get a density whose shape is already close to a normal bell curve.

(figure: gamma density for a large shape parameter, nearly bell shaped)

As we increase the shape parameter of the gamma distribution further, the curve resembles the normal curve more and more; if we let the shape parameter alpha tend to infinity, the gamma distribution becomes more and more symmetric and normal-like, but since the support of the gamma distribution is semi-infinite, [0, ∞), while the normal distribution is supported on the whole real line, the gamma distribution becomes symmetric but is never the same as the normal distribution.

poisson gamma distribution | poisson gamma distribution negative binomial

The Poisson distribution and the binomial distribution are discrete distributions dealing with discrete values, specifically the successes and failures of Bernoulli trials; the mixture of the Poisson and gamma distributions, also known as the negative binomial distribution, arises from repeated Bernoulli trials. It can be parameterized in different ways: if X is the trial on which the r-th success occurs, it can be parameterized as

P\{X = n\} = \binom{n-1}{r-1} p^r (1-p)^{n-r}, \quad n = r, r+1, \ldots

and if X is the number of failures before the r-th success, it can be parameterized as

P\{X = x\} = \binom{x+r-1}{x} p^r (1-p)^x, \quad x = 0, 1, 2, \ldots

and considering the values of r and p, the two parameterizations are related through the total number of trials,

n = x + r

the general form of the parameterization for the negative binomial or Poisson gamma distribution is

P\{X = x\} = \binom{x+r-1}{x} p^r (1-p)^x

and the alternative one is

P\{X = x\} = \binom{-r}{x} p^r (p - 1)^x

this binomial distribution is known as negative because of the coefficient

\binom{x+r-1}{x} = \frac{(x+r-1)(x+r-2) \cdots r}{x!} = (-1)^x \frac{(-r)(-r-1) \cdots (-r-(x-1))}{x!} = (-1)^x \binom{-r}{x}

and this negative binomial or Poisson gamma distribution is well defined, as the total probability for this distribution is one:

\sum_{x=0}^{\infty} \binom{x+r-1}{x} p^r (1-p)^x = p^r \left[ 1 - (1-p) \right]^{-r} = 1

The mean and variance of this negative binomial or Poisson gamma distribution are

E[X] = \frac{r(1-p)}{p}

Var(X) = \frac{r(1-p)}{p^2}

the Poisson and gamma relation we can get by the following calculation: if X given Λ = λ is Poisson(λ) and Λ is gamma distributed with parameters (r, β), then

P\{X = x\} = \int_0^{\infty} P\{X = x \mid \lambda\} f(\lambda) \, d\lambda

= \int_0^{\infty} \frac{e^{-\lambda} \lambda^x}{x!} \cdot \frac{\beta^r \lambda^{r-1} e^{-\beta \lambda}}{\Gamma(r)} \, d\lambda

= \frac{\beta^r}{x! \, \Gamma(r)} \int_0^{\infty} \lambda^{x+r-1} e^{-(1+\beta)\lambda} \, d\lambda

= \frac{\beta^r}{x! \, \Gamma(r)} \cdot \frac{\Gamma(x+r)}{(1+\beta)^{x+r}}

= \binom{x+r-1}{x} \left( \frac{\beta}{1+\beta} \right)^r \left( \frac{1}{1+\beta} \right)^x

Thus the negative binomial is the mixture of the Poisson and gamma distributions, and this distribution is used in modelling day to day problems where a mixture of discrete and continuous behaviour is required.
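The mixture identity can be verified by simulation; a minimal sketch with assumed values r = 3 and β = 2, comparing the simulated Poisson-gamma mixture with the negative binomial pmf with p = β/(1+β):

import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(8)
r, beta = 3.0, 2.0
lam = rng.gamma(shape=r, scale=1 / beta, size=1_000_000)  # Gamma(r, rate beta)
X = rng.poisson(lam)                                      # Poisson given lambda
for k in range(5):
    print(k, (X == k).mean(), nbinom.pmf(k, r, beta / (1 + beta)))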


Weibull gamma distribution

There are generalizations of the exponential distribution which involve the Weibull as well as the gamma distribution; the Weibull distribution has the probability density function

f(x) = \frac{k}{\lambda} \left( \frac{x}{\lambda} \right)^{k-1} e^{-(x/\lambda)^k}, \quad x \geq 0, \text{ and } 0 \text{ for } x < 0

and cumulative distribution function

F(x) = 1 - e^{-(x/\lambda)^k}, \quad x \geq 0

whereas the pdf and cdf of the gamma distribution have already been discussed above. The main connection between the Weibull and gamma distributions is that both are generalizations of the exponential distribution: setting the shape parameter to one in either family recovers the exponential distribution, the Weibull generalizing it by raising the variable in the exponent to a power, and the gamma by multiplying the exponential density by a power of the variable.

We will not discuss the generalized Weibull gamma distribution here; it requires a separate discussion.

application of gamma distribution in real life | gamma distribution uses | application of gamma distribution in statistics 

There are a number of applications where the gamma distribution is used to model a situation, such as aggregating insurance claims, accumulated rainfall amounts, manufacturing and distribution of a product, crowding on a specific website, traffic in a telecom exchange, and so on; in fact, the gamma distribution gives the predicted waiting time until the nth event. There are many applications of the gamma distribution in real life.

beta gamma distribution | relationship between gamma and beta distribution

The beta distribution is the distribution of the random variable with probability density function

f(x) = \frac{x^{a-1} (1-x)^{b-1}}{B(a, b)}, \quad 0 < x < 1, \text{ and } 0 \text{ otherwise}

where

B(a, b) = \int_0^1 x^{a-1} (1-x)^{b-1} \, dx

which has the relationship with the gamma function

B(a, b) = \frac{\Gamma(a) \Gamma(b)}{\Gamma(a+b)}

and the beta distribution is related to the gamma distribution as follows: if X is a gamma random variable with parameters (a, 1) and Y an independent gamma random variable with parameters (b, 1), then the random variable X/(X+Y) is beta distributed;

or, if X is Gamma(a, 1) and Y is Gamma(b, 1), independent, then the random variable X/(X+Y) is Beta(a, b)

and also

X + Y \sim \text{Gamma}(a + b, 1), \quad \text{independent of } \frac{X}{X+Y}
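This gamma-beta relationship is straightforward to check by simulation; a minimal sketch with the assumed shapes a = 2 and b = 5:

import numpy as np
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(9)
a, b = 2.0, 5.0
X = rng.gamma(a, size=1_000_000)   # Gamma(a, 1)
Y = rng.gamma(b, size=1_000_000)   # Gamma(b, 1), independent of X
V = X / (X + Y)
print(V.mean(), beta_dist.mean(a, b))   # both about a/(a+b) = 0.2857
print(V.var(), beta_dist.var(a, b))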

bivariate gamma distribution

A two dimensional or bivariate random variable is continuous if there exists a function f(x,y) such that the joint distribution function is

F(x, y) = \int_{-\infty}^{y} \int_{-\infty}^{x} f(u, v) \, du \, dv

where

f(x, y) \geq 0, \quad -\infty < x, y < \infty

\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y) \, dx \, dy = 1

and the joint probability density function is obtained by

f(x, y) = \frac{\partial^2}{\partial x \, \partial y} F(x, y)

there are a number of bivariate gamma distributions; one of them is the bivariate gamma distribution (McKay's form) with probability density function

f(x, y) = \frac{a^{p+q}}{\Gamma(p) \Gamma(q)} x^{p-1} (y - x)^{q-1} e^{-ay}, \quad 0 < x < y < \infty

double gamma distribution

Double gamma distribution is one of the bivariate distributions, with gamma random variables having parameters α1 and α2 and joint probability density function

f(y_1, y_2) = \frac{1}{\Gamma(\alpha_1) \Gamma(\alpha_2)} y_1^{\alpha_1 - 1} y_2^{\alpha_2 - 1} e^{-y_1 - y_2}, \quad y_1 > 0, \; y_2 > 0

this density forms the double gamma distribution with the respective random variables, and the moment generating function for the double gamma distribution is

M(t_1, t_2) = (1 - t_1)^{-\alpha_1} (1 - t_2)^{-\alpha_2}

relation between gamma and exponential distribution | exponential and gamma distribution | gamma exponential distribution

Since the exponential distribution is the distribution with probability density function

f(x) = \lambda e^{-\lambda x}, \quad x \geq 0, \text{ and } 0 \text{ for } x < 0

and the gamma distribution has the probability density function

f(x) = \frac{\lambda e^{-\lambda x} (\lambda x)^{\alpha-1}}{\Gamma(\alpha)}, \quad x \geq 0, \text{ and } 0 \text{ for } x < 0

clearly, if we set the value of alpha to one we get the exponential distribution; that is, the gamma distribution is nothing but a generalization of the exponential distribution, predicting the waiting time until the occurrence of the nth event, while the exponential distribution predicts the waiting time until the occurrence of the next event.

fit gamma distribution

Fitting given data to a gamma distribution means finding the probability density function, with its shape, location and scale parameters, that describes the data: finding these parameters for the application at hand and then calculating the mean, variance, standard deviation and moment generating function constitutes the fitting of the gamma distribution. Since different real life problems can be modelled with the gamma distribution, the information in each situation must be fitted accordingly; various techniques in various environments already exist for this purpose, e.g. in R, Matlab, Excel, etc.
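For example, in Python the fitting can be done with SciPy's maximum likelihood fitter; a minimal sketch on synthetic data (fixing the location at zero is an assumed modelling choice):

import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(10)
data = rng.gamma(shape=2.5, scale=1.8, size=5_000)   # synthetic gamma data
shape, loc, scale = gamma.fit(data, floc=0)          # floc=0 pins the location
print(shape, scale)   # close to the true values 2.5 and 1.8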

shifted gamma distribution

As per application and need, whenever shifting of the distribution is required, the two parameter gamma distribution is generalized to a three parameter (or otherwise generalized) gamma distribution that shifts the shape, location and scale; such a gamma distribution is known as the shifted gamma distribution.

truncated gamma distribution

If we restrict the range or domain of the gamma distribution, the restricted gamma distribution is known as the truncated gamma distribution, based on the imposed conditions.

survival function of gamma distribution

The survival function for the gamma distribution is defined as the function s(x) given by

s(x) = 1 - F(x) = 1 - \frac{\gamma(\alpha, \lambda x)}{\Gamma(\alpha)}, \quad x > 0

where F is the gamma cumulative distribution function and γ is the lower incomplete gamma function.

mle of gamma distribution | maximum likelihood gamma distribution | likelihood function of gamma distribution

We know that the maximum likelihood method takes a sample from the population as representative and uses this sample in the probability density function, maximizing over the parameters of the density function. Before going to the gamma distribution, recall some basics: for the random variable X whose probability density function has θ as parameter, the likelihood function is

L(\theta) = f(x_1 \mid \theta) f(x_2 \mid \theta) \cdots f(x_n \mid \theta)

this we can express as

L(\theta) = \prod_{i=1}^{n} f(x_i \mid \theta)

and the method of maximizing this likelihood function uses

\frac{\partial}{\partial \theta} L(\theta) = 0

if such a θ satisfies this equation; and as log is a monotone function, we can work in terms of the log,

\log L(\theta) = \sum_{i=1}^{n} \log f(x_i \mid \theta)

and such a supremum exists if

L(\hat{\theta} \mid x_1, \ldots, x_n) = \sup_{\theta} L(\theta \mid x_1, \ldots, x_n)

now we apply the maximum likelihood to the gamma distribution function

f(x \mid \alpha, \lambda) = \frac{\lambda^{\alpha} x^{\alpha-1} e^{-\lambda x}}{\Gamma(\alpha)}

the log likelihood of the function will be

\log L(\alpha, \lambda) = n\alpha \log \lambda + (\alpha - 1) \sum_{i=1}^{n} \log x_i - \lambda \sum_{i=1}^{n} x_i - n \log \Gamma(\alpha)

so

\frac{\partial \log L}{\partial \lambda} = \frac{n\alpha}{\lambda} - \sum_{i=1}^{n} x_i = 0

and hence

\hat{\lambda} = \frac{n\alpha}{\sum_{i=1}^{n} x_i} = \frac{\alpha}{\bar{x}}

This can be achieved also as

L(\alpha, \beta \mid x) = \left( \frac{\beta^{\alpha}}{\Gamma(\alpha)} x_1^{\alpha-1} e^{-\beta x_1} \right) \cdots \left( \frac{\beta^{\alpha}}{\Gamma(\alpha)} x_n^{\alpha-1} e^{-\beta x_n} \right) = \left( \frac{\beta^{\alpha}}{\Gamma(\alpha)} \right)^n (x_1 x_2 \cdots x_n)^{\alpha-1} e^{-\beta \sum_i x_i}

by

\log L(\alpha, \beta \mid x) = n\alpha \log \beta - n \log \Gamma(\alpha) + (\alpha - 1) \sum_i \log x_i - \beta \sum_i x_i

and the parameters can be obtained by differentiating,

\frac{\partial \log L}{\partial \beta} = \frac{n\alpha}{\beta} - \sum_i x_i = 0 \quad \Rightarrow \quad \hat{\beta} = \frac{n\alpha}{\sum_i x_i}

\frac{\partial \log L}{\partial \alpha} = n \log \beta - n\psi(\alpha) + \sum_i \log x_i = 0

where ψ is the digamma function; substituting β̂ gives the equation

\log \hat{\alpha} - \psi(\hat{\alpha}) = \log \bar{x} - \frac{1}{n} \sum_{i=1}^{n} \log x_i

which has no closed form and is solved numerically for α̂.

gamma distribution parameter estimation method of moments | method of moments estimator gamma distribution

We can calculate the moments of the population and of the sample with the help of expectations of nth order; the method of moments equates these moments of the distribution and of the sample to estimate the parameters. Suppose we have a sample of gamma random variables with the probability density function

f(x \mid \alpha, \lambda) = \frac{\lambda e^{-\lambda x} (\lambda x)^{\alpha-1}}{\Gamma(\alpha)}, \quad x > 0

we know the first two moments for this probability density function are

\mu_1 = \frac{\alpha}{\lambda}, \quad \mu_2 = \frac{\alpha (\alpha + 1)}{\lambda^2}

so

\lambda = \frac{\alpha}{\mu_1}

we will get from the second moment, if we substitute lambda,

\frac{\mu_2}{\mu_1^2} = \frac{\alpha + 1}{\alpha}

and from this the value of alpha is

\alpha = \frac{\mu_1^2}{\mu_2 - \mu_1^2}

and now lambda will be

\lambda = \frac{\mu_1}{\mu_2 - \mu_1^2}

and the moment estimators, using the sample moments m_1 = \bar{X} and m_2 = \frac{1}{n} \sum_{i=1}^{n} X_i^2, will be

\hat{\alpha} = \frac{m_1^2}{m_2 - m_1^2}, \quad \hat{\lambda} = \frac{m_1}{m_2 - m_1^2}
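A minimal sketch of these moment estimators on simulated data (assumed true values α = 3, λ = 2):

import numpy as np

rng = np.random.default_rng(11)
alpha, lam = 3.0, 2.0
x = rng.gamma(shape=alpha, scale=1 / lam, size=50_000)
m1, m2 = x.mean(), (x**2).mean()     # first two sample moments
print(m1**2 / (m2 - m1**2))          # alpha-hat, near 3.0
print(m1 / (m2 - m1**2))             # lambda-hat, near 2.0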

confidence interval for gamma distribution

A confidence interval for the gamma distribution is a way to estimate a parameter together with its uncertainty: it tells at what confidence level the interval is expected to contain the true value of the parameter. This confidence interval is obtained from observations of random variables, and since it is obtained from random data it is itself random; to get the confidence interval for the gamma distribution there are different techniques, for different applications, that we have to follow.

gamma distribution conjugate prior for exponential distribution | gamma prior distribution | posterior distribution poisson gamma

The posterior and prior distributions are terminologies of Bayesian probability theory; two distributions are conjugate if the posterior of one is again a distribution of the same family. In terms of θ, let us show that the gamma distribution is conjugate prior to the exponential distribution:

if the probability density function of the gamma distribution in terms of θ is

f(\theta) = \frac{\beta^{\alpha} \theta^{\alpha-1} e^{-\beta \theta}}{\Gamma(\alpha)}, \quad \theta > 0

assume the distribution of the given data x is exponential with parameter θ,

f(x \mid \theta) = \theta e^{-\theta x}, \quad x > 0

so the joint distribution will be

f(x, \theta) = f(x \mid \theta) f(\theta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} \theta^{\alpha} e^{-(\beta + x)\theta}

and using the relation

f(\theta \mid x) = \frac{f(x, \theta)}{f(x)} \propto f(x \mid \theta) f(\theta)

we have

f(\theta \mid x) \propto \theta e^{-\theta x} \cdot \theta^{\alpha-1} e^{-\beta \theta}

\propto \theta^{\alpha} e^{-(\beta + x)\theta}

which is

f(\theta \mid x) = \frac{(\beta + x)^{\alpha+1} \theta^{\alpha} e^{-(\beta + x)\theta}}{\Gamma(\alpha + 1)}

so the gamma distribution is conjugate prior to the exponential distribution, as the posterior is again a gamma distribution, with parameters (α + 1, β + x).
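For n independent exponential observations the same computation gives a Gamma(α + n, β + Σxi) posterior; a minimal sketch of that update (the prior parameters and data are made up):

import numpy as np
from scipy.stats import gamma

alpha, beta = 2.0, 1.0               # gamma prior on theta
x = np.array([0.8, 1.5, 0.3])        # exponential observations
post = gamma(a=alpha + len(x), scale=1 / (beta + x.sum()))
print(post.mean())                   # posterior mean (alpha + n)/(beta + sum x)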

gamma distribution quantile function

The quantile function of the gamma distribution is the function that gives the point in the gamma distribution corresponding to a given probability rank; it is the inverse of the cumulative distribution function, and since the gamma cdf has no closed-form inverse, different languages provide different algorithms and functions for the quantile of the gamma distribution.

generalized gamma distribution

As the gamma distribution itself is a generalization of the exponential family of distributions, adding more parameters gives us the generalized gamma distribution, a further generalization of this family; physical requirements lead to different generalizations, a frequent one being the distribution with probability density function

f(x) = \frac{(p / a^d) \, x^{d-1} e^{-(x/a)^p}}{\Gamma(d/p)}, \quad x > 0

the cumulative distribution function for such a generalized gamma distribution can be obtained from

F(x) = \frac{\gamma\left( \frac{d}{p}, \left( \frac{x}{a} \right)^p \right)}{\Gamma\left( \frac{d}{p} \right)}

where the numerator is the lower incomplete gamma function

\gamma(s, x) = \int_0^x t^{s-1} e^{-t} \, dt

using this incomplete gamma function, the survival function for the generalized gamma distribution can be obtained as

S(x) = 1 - \frac{\gamma\left( \frac{d}{p}, \left( \frac{x}{a} \right)^p \right)}{\Gamma\left( \frac{d}{p} \right)}

another version of this three parameter generalized gamma distribution has the probability density function

f(x) = \frac{\beta}{\Gamma(k) \, \theta} \left( \frac{x}{\theta} \right)^{k\beta - 1} e^{-(x/\theta)^{\beta}}, \quad x > 0

where k, β, θ are parameters greater than zero; these generalizations have convergence issues, and to overcome them the Weibull-style parameters k, β, θ are replaced by location, scale and shape parameters μ, σ and λ,

using this parameterization the convergence of the density function is obtained, so the more general form of the gamma distribution with convergence is the distribution with probability density function

f(t) = \frac{|\lambda|}{\sigma \, t \, \Gamma(1/\lambda^2)} \exp\left\{ \frac{\lambda \frac{\ln t - \mu}{\sigma} + \ln \frac{1}{\lambda^2} - e^{\lambda \frac{\ln t - \mu}{\sigma}}}{\lambda^2} \right\}, \quad t > 0

Beta generalized gamma distribution

The gamma distribution involves the parameter β in its density function, because of which the gamma distribution is sometimes known as the beta generalized gamma distribution, with the density function

g(x) = \frac{\beta^{\alpha} x^{\alpha-1} e^{-\beta x}}{\Gamma(\alpha)}, \quad x > 0

with cumulative distribution function

G(x) = \frac{\gamma(\alpha, \beta x)}{\Gamma(\alpha)}

which has already been discussed in detail in the discussion of the gamma distribution; the further beta generalized gamma distribution is defined with the cdf

F(x) = \frac{1}{B(a, b)} \int_0^{G(x)} \omega^{a-1} (1 - \omega)^{b-1} \, d\omega

where B(a,b) is the beta function, and the probability density function for this can be obtained by differentiation; the density function will be

f(x) = \frac{1}{B(a, b)} \, g(x) \, [G(x)]^{a-1} [1 - G(x)]^{b-1}

here G(x) is the above defined cumulative distribution function of the gamma distribution; if we put in this value, then the cumulative distribution function of the beta generalized gamma distribution is

F(x) = \frac{1}{B(a, b)} \int_0^{\gamma(\alpha, \beta x)/\Gamma(\alpha)} \omega^{a-1} (1 - \omega)^{b-1} \, d\omega

and the probability density function is

f(x) = \frac{1}{B(a, b)} \cdot \frac{\beta^{\alpha} x^{\alpha-1} e^{-\beta x}}{\Gamma(\alpha)} \left[ \frac{\gamma(\alpha, \beta x)}{\Gamma(\alpha)} \right]^{a-1} \left[ 1 - \frac{\gamma(\alpha, \beta x)}{\Gamma(\alpha)} \right]^{b-1}

the remaining properties can be extended to this beta generalized gamma distribution with the usual definitions.

Conclusion:

There are different forms and generalizations of the gamma distribution and of the gamma distribution exponential family suited to real life situations; the possible such forms and generalizations were covered, in addition to the estimation methods for the gamma distribution used when sampling information from a population. If you require further reading on the gamma distribution exponential family, please go through the link and books below. For more topics on Mathematics please visit our page.

https://en.wikipedia.org/wiki/Gamma_distribution

A first course in probability by Sheldon Ross

Schaum’s Outlines of Probability and Statistics

An introduction to probability and statistics by ROHATGI and SALEH