Can a normal distribution be skewed? No: the normal distribution is not a skewed distribution, because its curve is symmetric, has no dominant tail, and therefore has zero skewness. The normal distribution curve is bell shaped and symmetric about its mean.
Since skewness is the lack of symmetry in a curve, a curve that is symmetric has no skewness.
How do you tell if the data is normally distributed?
To check whether data is normally distributed, sketch its histogram: if the resulting curve is symmetric, the data is approximately normally distributed, and from the curve itself the question "can a normal distribution be skewed" is settled once the concept of skewness is clear. Sketching the histogram or curve in every case is tedious and time consuming, so instead there are a number of statistical tests, like the Anderson-Darling (AD) test, which are more useful for telling whether data is normally distributed or not.
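In practice both checks are a few lines of code; here is a minimal Python sketch using scipy, where the simulated sample is only a stand-in for real observations:

```python
import numpy as np
from scipy import stats

# Stand-in data: replace with any 1-D array of real observations.
rng = np.random.default_rng(0)
data = rng.normal(loc=67, scale=9, size=500)

# Sample skewness: values near zero suggest a symmetric curve.
print("skewness:", stats.skew(data))

# Anderson-Darling test for normality: reject normality when the
# statistic exceeds the critical value at the chosen level.
result = stats.anderson(data, dist='norm')
print("AD statistic:", result.statistic)
for cv, sl in zip(result.critical_values, result.significance_level):
    print(f"  {sl}%: critical value {cv}")
```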
Data that follows the normal distribution has zero skewness, while the curve of a skewed distribution lacks this symmetry; we illustrate the normal case with the following example:
Example: Find the percentage of scores lying between 70 and 80, if the mathematics scores of university students are normally distributed with mean 67 and standard deviation 9.
Solution:
To find the percentage of scores we follow the probabilities for the normal distribution discussed earlier: first convert the raw score into the standard normal variate, then use the standard normal table, with the conversion
Z=(X-μ)/σ
We want the percentage of scores between 70 and 80, so we use the values 70 and 80 with the given mean 67 and standard deviation 9; this gives
Z = (70 - 67)/9 = 0.333
and
Z = (80 - 67)/9 = 1.444
Sketching this, the shaded area between z = 0.333 and z = 1.444 is the required region; from the table of the standard normal variate the tail probabilities are
P(z > 0.333) = 0.3707 and P(z > 1.444) = 0.0749, so P(0.333 < z < 1.444) = P(z > 0.333) - P(z > 1.444) = 0.3707 - 0.0749 = 0.2958
so 29.58% of students will score between 70 and 80.
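The same probability can be computed directly from the normal cdf instead of the z-table; a one-line check in Python (the small difference from 0.2958 comes from rounding z to three decimals in the table lookup):

```python
from scipy.stats import norm

mu, sigma = 67, 9
# P(70 < X < 80) for X ~ Normal(67, 9)
p = norm.cdf(80, mu, sigma) - norm.cdf(70, mu, sigma)
print(f"{p:.4f}")   # about 0.295, i.e. roughly 29.5% of students
```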
In the above example the skewness of the curve is zero and the curve is symmetric; to check whether real data is normally distributed, we perform hypothesis tests such as those mentioned above.
How do you tell if a distribution is skewed left or right?
A distribution is skewed if its curve has a right tail or a left tail, so depending on the direction of the tail we can judge whether the distribution is positively or negatively skewed; the concept of skewness is discussed in detail in the articles on positively and negatively skewed distributions. If the tail stretches out on the left side, the distribution is skewed left; if the tail stretches out on the right side, the distribution is skewed right. Another way to check is to compare the central tendencies: if mean < median < mode the distribution is left skewed, and if mean > median > mode the distribution is right skewed. The geometrical representation is as follows
The measures used to calculate left or right skewness are given in detail in the article on skewness.
What is an acceptable skewness?
Since skewness, as discussed earlier, is the lack of symmetry, the acceptable range must be made clear. The question arises whether the skewness of a given distribution is acceptable, and the reference point is the normal distribution, whose skewness is zero: a distribution whose skewness is near zero is more acceptable. So after testing for skewness, if the value is near zero then the skewness is acceptable, depending on the requirement and the range set for the client.
In brief, acceptable skewness is skewness that is near zero, as the requirement dictates.
How skewed is too skewed?
Skewness is the statistical measure of the asymmetry present in the curve of a distribution, and the usual measures quantify how far the curve departs from symmetry: if the computed skewness is far from zero, the distribution is too skewed, while a value near zero indicates an approximately symmetric distribution.
How do you determine normal distribution?
To determine whether a distribution is normal, we check whether it has symmetry: if the symmetry is present and the skewness is zero, the distribution is a normal distribution. The detailed methods and techniques were already discussed in the article on the normal distribution.
Do outliers skew data?
In a data set, an observation that behaves unusually and lies very far from the rest of the data is known as an outlier, and in most cases outliers are responsible for the skewness of the distribution; because of their unusual position they pull the curve out into a tail, so in general outliers do skew data. Outliers will not skew the data in every case, however; they skew it only when they fall systematically on one side of a continuous distribution so as to produce a left- or right-tailed curve.
The normal distribution and skewed distributions were discussed in detail in the previous articles.
Skewed Distribution | skewed distribution definition
A distribution in which symmetry is not present, and whose curve shows a tail on either the left or the right side, is known as a skewed distribution; skewness is thus the asymmetry present in the curve or histogram, in contrast to a symmetric or normal curve.
Whether a distribution is skewed can be judged from its measures of central tendency: there are specific relations between the mean, mode and median in left-tailed and right-tailed skewed distributions.
normal distribution vs skewed | normal vs skewed distribution
Normal distribution
skewed distribution
In Normal distribution the curve is symmetric
In skewed distribution the curve is not symmetric
The measure of central tendencies mean, mode and median are equal
The measure of central tendencies mean, mode and median are not equal
mean=median =mode
mean>median>mode or mean<median<mode
Normal distribution vs skewed distribution
skewed distribution examples in real life
Skewed distributions occur in a number of real-life situations, such as ticket sales for a particular show or movie across different months, records of athletes' performance in competition, stock market returns, fluctuations in real estate rates, the life cycle of a specific species, income variation, exam scores and many other competitive outcomes. Distribution curves showing asymmetry occur frequently in applications.
difference between symmetrical and skewed distribution | symmetrical and skewed distribution
The main difference between symmetrical and skewed distributions lies in the relation between the central tendencies mean, median and mode; in addition, as the names suggest, in a symmetrical distribution the curve of the distribution is symmetric, while in a skewed distribution the curve is not symmetric but has skewness, which may be right-tailed or left-tailed, or occasionally tailed on both sides. Distributions differ in the nature of their skewness and symmetry, so all probability distributions can be classified into these two main categories.
To find the nature of a distribution, whether symmetric or skewed, we must either draw the curve of the distribution or compute the coefficient of skewness with the help of absolute or relative measures.
highly skewed distribution
If the modal (highest) value of a distribution differs from its mean and median, the distribution is skewed; if the highest value coincides with the mean and median, and all are equal, the distribution is symmetric. A highly skewed distribution may be positively or negatively skewed, and its modal value can be found using the coefficient of skewness.
Negatively skewed distribution| which is a negatively skewed distribution
Any distribution in which the measures of central tendency follow the order mean < median < mode, and in which the coefficient of skewness is negative, is a negatively skewed distribution. It is also known as a left-skewed distribution, because the tail of the graph or plot of the information extends to the left.
The coefficient of skewness for a negatively skewed distribution can easily be found with the usual methods of computing coefficients of skewness.
negatively skewed distribution example
If 150 students performed in an examination as given below, find the nature of the skewness of the distribution:
marks:  0-10   10-20   20-30   30-40   40-50   50-60   60-70   70-80
freq:     12      40      18       0      12      42      14      12
Solution: To find the nature of the skewness of the distribution we calculate the coefficient of skewness, for which we require the mean, mode, median and standard deviation of the given information; we calculate these with the help of the following table
class interval     f    mid value x    c.f.    d'=(x-35)/10    f*d'    f*d'^2
0-10              12         5           12        -3           -36      108
10-20             40        15           52        -2           -80      160
20-30             18        25           70        -1           -18       18
30-40              0        35           70         0             0        0
40-50             12        45           82         1            12       12
50-60             42        55          124         2            84      168
60-70             14        65          138         3            42      126
70-80             12        75          150         4            48      192
total            150                                    total=52    total=784
so the measures will be
mean = 35 + 10 x (52/150) = 38.47, median = 40 + ((75 - 70)/12) x 10 = 44.17, mode = 50 + ((42 - 12)/(2 x 42 - 12 - 14)) x 10 = 55.17
and
standard deviation = 10 x sqrt(784/150 - (52/150)^2) = 22.60
hence Karl Pearson's coefficient of skewness for the distribution is
Sk = (mean - mode)/standard deviation = (38.47 - 55.17)/22.60 = -0.74 (approximately)
Since the coefficient of skewness is negative, the distribution is negatively skewed.
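As a cross-check, here is a short Python sketch that recomputes these grouped-data measures directly from the frequency table (using the standard modal-class formula for the mode):

```python
import numpy as np

# Class midpoints and frequencies from the table above
x = np.array([5, 15, 25, 35, 45, 55, 65, 75])
f = np.array([12, 40, 18, 0, 12, 42, 14, 12])
N = f.sum()                                    # 150

mean = (f * x).sum() / N                       # 38.47
sd = np.sqrt((f * (x - mean) ** 2).sum() / N)  # 22.60
# Modal class 50-60: mode = L + (f1 - f0)/(2*f1 - f0 - f2) * h
mode = 50 + (42 - 12) / (2 * 42 - 12 - 14) * 10   # 55.17

pearson_sk = (mean - mode) / sd
print(mean, mode, sd, pearson_sk)              # skewness = -0.74 < 0
```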
negatively skewed distribution mean median mode
In a negatively skewed distribution the mean, median and mode are in ascending order, which reflects the tail on the left side of the curve of the distribution; the measures of central tendency for the negatively skewed distribution follow exactly the reverse pattern of the positively skewed distribution, and its curve is the mirror image of the positively skewed curve. So mean < median < mode in a negatively skewed distribution.
negatively skewed distribution curve
The curve of a negatively skewed distribution is left-skewed, without symmetry, whether drawn as a histogram or as a continuous curve.
Since skewness is the measure of the asymmetry present in a distribution, the curve of a negatively skewed distribution shows that asymmetry on the left side.
positively skewed normal distribution
A continuous distribution whose curve resembles the normal curve but gathers more of its information in the right tail shows a right-skewed curve: it is asymmetric about the median, with the central tendencies in the descending order mean > median > mode. Such a curve is what is loosely meant by a positively skewed normal distribution.
FAQs
Why chi square distribution is positively skewed
The chi-square distribution takes values from zero to infinity, and the curve of the distribution gathers its information in a long right tail, so it shows a right-skewed curve; hence the chi-square distribution is a positively skewed distribution.
Is Poisson distribution positively skewed
Yes, the Poisson distribution is a positively skewed distribution, as its probability mass trails off toward the right tail, so the nature of the plot is positively skewed.
Why does negative binomial distribution always positively skew
The negative binomial distribution is always positively skewed because it generalizes the geometric distribution (the Pascal distribution is its integer-parameter case), which is itself always positively skewed; the right tail persists for every choice of its parameters, and so does the positive skew.
Does skewness have any impact on linear regression models if my dependent variable and my interaction variable are positively skewed?
Skewness of the dependent variable or of an interaction variable does not by itself mean that the regression errors are skewed, and vice versa: skewed errors do not mean the variables are skewed. What matters for the usual regression inference is the distribution of the errors, not of the variables themselves.
The Hermite polynomial occurs widely in applications as an orthogonal function. The Hermite polynomial is the series solution of Hermite's differential equation.
Hermite’s Equation
The second-order differential equation with the specific coefficients
d^2y/dx^2 - 2x dy/dx + 2ny = 0
is known as Hermite's equation; by solving this differential equation we get the polynomial solution, which is the Hermite polynomial.
Let us find the solution of the equation
d2y/dx2 – 2x dy/dx + 2ny = 0
with the help of the series-solution method for differential equations: assume a power series y = sum over k of a_k x^(m+k), k = 0, 1, 2, ..., with a_0 not zero;
now, substituting this series and its derivatives into Hermite's equation, we have
This equation must hold for each power of x, and as we assumed, the exponent is not negative. For the lowest degree term x^(m-2), take k = 0 in the first summation (the second gives no such term), so the coefficient of x^(m-2) gives
a_0 m(m-1) = 0, so m = 0 or m = 1
as a_0 is not zero;
in the same way, equating the coefficient of x^(m-1) from the second summation gives a_1 m(m+1) = 0,
and equating the coefficients of x^(m+k) to zero,
a_{k+2}(m+k+2)(m+k+1) - 2a_k(m+k-n) = 0
we can write it as
a_{k+2} = [2(m+k-n)/((m+k+2)(m+k+1))] a_k
if m = 0
a_{k+2} = [2(k-n)/((k+2)(k+1))] a_k
if m = 1
a_{k+2} = [2(k+1-n)/((k+3)(k+2))] a_k
for these two cases now we discuss the cases for k
so for m = 0 we have two cases: when a_1 = 0, then a_3 = a_5 = a_7 = ... = a_{2r+1} = 0, and when a_1 is not zero, the odd coefficients follow from the same recurrence; then
by following this put the values of a0,a1,a2,a3,a4 and a5 we have
and for m = 1, a_1 = 0; by putting k = 0, 1, 2, 3, ... in
a_{k+2} = [2(k+1-n)/((k+3)(k+2))] a_k we get
so the solution will be
so the complete solution is
where A and B are the arbitrary constants
Hermite Polynomial
The solution of Hermite's equation is of the form y(x) = A y1(x) + B y2(x), where y1(x) and y2(x) are the series discussed above;
one of these series terminates when n is a non-negative integer: y1 terminates if n is even, otherwise y2 terminates if n is odd, and we can easily verify that for n = 0, 1, 2, 3, 4, 5 these polynomial solutions are
1, x, 1 - 2x^2, x - (2/3)x^3, 1 - 4x^2 + (4/3)x^4, x - (4/3)x^3 + (4/15)x^5
so we can say that the polynomial solutions of Hermite's equation are constant multiples of these polynomials; the multiple normalized so that the term containing the highest power of x is of the form 2^n x^n is denoted by H_n(x) and is known as the Hermite polynomial.
Generating function of Hermite polynomial
The Hermite polynomial is usually defined through the generating-function relation
e^(2xt - t^2) = sum over n of H_n(x) t^n / n!
where [n/2] is the greatest integer less than or equal to n/2; expanding the exponential and collecting powers of t, it follows that the value of H_n(x) is
H_n(x) = sum over k = 0 to [n/2] of (-1)^k [n!/(k!(n-2k)!)] (2x)^(n-2k)
this shows that H_n(x) is a polynomial of degree n in x and
H_n(x) = 2^n x^n + pi_{n-2}(x)
where pi_{n-2}(x) is a polynomial of degree n-2 in x; H_n is an even function of x for even n and an odd function of x for odd n, so
H_n(-x) = (-1)^n H_n(x)
some of the starting Hermite polynomials are
H0(x) = 1
H1(x) = 2x
H2(x) = 4x^2 - 2
H3(x) = 8x^3 - 12x
H4(x) = 16x^4 - 48x^2 + 12
H5(x) = 32x^5 - 160x^3 + 120x
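These polynomials can be generated numerically from the recurrence H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x) proved below; a minimal Python sketch, storing coefficients lowest degree first:

```python
import numpy as np

def hermite(n):
    """Coefficients of the physicists' Hermite polynomial H_n,
    lowest degree first, via H_{k+1} = 2x H_k - 2k H_{k-1}."""
    h0, h1 = np.array([1.0]), np.array([0.0, 2.0])  # H_0 and H_1
    if n == 0:
        return h0
    for k in range(1, n):
        nxt = np.zeros(k + 2)
        nxt[1:] += 2 * h1          # 2x * H_k shifts degrees up by one
        nxt[:k] -= 2 * k * h0      # subtract 2k * H_{k-1}
        h0, h1 = h1, nxt
    return h1

for n in range(6):
    print(n, hermite(n))   # matches H_0 .. H_5 listed above
```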
Generating function of Hermite polynomial by Rodrigue Formula
The Hermite polynomial can also be defined by the Rodrigues formula, obtained from the generating function:
since the generating-function relation can be written e^(2xt - t^2) = e^(x^2) e^(-(x-t)^2),
using Maclaurin's theorem we have H_n(x) = the n-th partial derivative of e^(2xt - t^2) with respect to t, evaluated at t = 0,
or, equivalently, H_n(x) = e^(x^2) times the n-th partial t-derivative of e^(-(x-t)^2) at t = 0;
by putting z = x - t, so that each derivative in t becomes a derivative in z with a sign change,
and setting t = 0, so z = x, this gives the Rodrigues formula H_n(x) = (-1)^n e^(x^2) d^n/dx^n e^(-x^2)
this we can show in another way as
differentiating
with respect to t gives
taking limit t tends to zero
now differentiating with respect to x
taking limit t tends to zero
from these two expressions we can write
in the same way we can write
differentiating n times put t=0, we get
from these values we can write
from these we can get the values
Example on Hermite Polynomial
Find the ordinary polynomial of
Solution: using the Hermite polynomial definition and the relations we have
2. Find the Hermite polynomial of the ordinary polynomial
Solution: The given equation we can convert to Hermite as
and from this equation equating the same powers coefficient
hence the Hermite polynomial will be
Orthogonality of Hermite Polynomial | Orthogonal property of Hermite Polynomial
The important characteristic of the Hermite polynomial is its orthogonality, which states that the integral from minus infinity to infinity of e^(-x^2) H_m(x) H_n(x) dx equals 2^n n! sqrt(pi) when m = n, and zero when m is not equal to n.
To prove this orthogonality let us recall that
which is the generating function for the Hermite polynomial and we know
so multiplying these two equations we will get
multiplying and integrating within infinite limits
and since
so
using this value in above expression we have
which gives
now equate the coefficients on both the sides
which shows the orthogonal property of Hermite polynomial.
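As a numerical sanity check of this orthogonality relation, the sketch below uses Gauss-Hermite quadrature from NumPy, which integrates against the weight e^(-x^2) exactly for polynomial integrands of modest degree:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval
from math import factorial, sqrt, pi

# 40-point rule: exact for polynomial integrands up to degree 79.
nodes, weights = hermgauss(40)

def H(n, x):
    c = np.zeros(n + 1)
    c[n] = 1.0              # coefficient vector selecting H_n
    return hermval(x, c)

for m in range(4):
    for n in range(4):
        integral = np.sum(weights * H(m, nodes) * H(n, nodes))
        expected = (2 ** n) * factorial(n) * sqrt(pi) if m == n else 0.0
        assert abs(integral - expected) < 1e-8 * max(1.0, expected)
print("verified: integral = 2^n n! sqrt(pi) when m = n, else 0")
```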
The orthogonality of the Hermite polynomials can also be shown in another way, by considering the recurrence relations.
Example on orthogonality of Hermite Polynomial
1.Evaluate the integral
Solution: By the orthogonality property of the Hermite polynomials, the integral vanishes whenever m is not equal to n;
since the values here are m = 3 and n = 2, the integral is zero.
2. Evaluate the integral
Solution: Using the orthogonality property of Hermite polynomial we can write
Recurrence relations of Hermite polynomial
The values of the Hermite polynomials can easily be found from the recurrence relations
1. H'_n(x) = 2n H_{n-1}(x)
2. H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x)
3. H'_n(x) = 2x H_n(x) - H_{n+1}(x)
4. H''_n(x) - 2x H'_n(x) + 2n H_n(x) = 0
These relations can easily be obtained with the help of the definition and the properties above.
Proofs:1. We know the Hermite equation
y”-2xy’+2ny = 0
and the relation
by taking differentiation with respect to x partially we can write it as
from these two equations
now replace n by n-1
by equating the coefficient of tn
so the required result is
2. In the similar way differentiating partially with respect to t the equation
we get
the n = 0 term vanishes, so by substituting this value of the exponential series
now equating the coefficients of tn
thus
3. To prove this result we will eliminate Hn-1 from
and
so we get
thus we can write the result
4. To prove this result we differentiate
we get the relation
substituting the value
and replacing n by n+1
which gives
Examples on Recurrence relations of Hermite polynomial
1.Show that
H_{2n}(0) = (-1)^n 2^(2n) (1/2)_n, where (1/2)_n = (1/2)(3/2)...((2n-1)/2)
Solution:
To show the result we have
H_{2n}(x) = sum over k = 0 to n of (-1)^k [(2n)!/(k!(2n-2k)!)] (2x)^(2n-2k)
taking x = 0 here, only the k = n term survives, and we get H_{2n}(0) = (-1)^n (2n)!/n! = (-1)^n 2^(2n) (1/2)_n
2. Show that
H'_{2n+1}(0) = (-1)^n 2^(2n+1) (3/2)_n
Solution:
Since from the recurrence relation
H'_n(x) = 2n H_{n-1}(x)
here replace n by 2n+1, so
H'_{2n+1}(x) = 2(2n+1) H_{2n}(x)
taking x = 0, H'_{2n+1}(0) = 2(2n+1) H_{2n}(0) = 2(2n+1) (-1)^n (2n)!/n! = (-1)^n 2^(2n+1) (3/2)_n
3. Find the value of
H2n+1(0)
Solution
Since we know H_n(-x) = (-1)^n H_n(x), the polynomial H_{2n+1} is an odd function,
so using x = 0 here gives
H_{2n+1}(0) = 0
4. Find the value of H’2n(0).
Solution :
we have the recurrence relation
H’n(x) = 2nHn-1(x)
here replace n by 2n
H'_{2n}(x) = 2(2n) H_{2n-1}(x)
put x=0
H’2n(0) = (4n)H2n-1(0) = 4n*0=0
5. Show the following result
Solution :
Using the recurrence relation
H'_n(x) = 2n H_{n-1}(x)
differentiating again,
d^2/dx^2 {H_n(x)} = 2^2 n(n-1) H_{n-2}(x)
and
d^3/dx^3 {H_n(x)} = 2^3 n(n-1)(n-2) H_{n-3}(x)
differentiating in this way m times
gives d^m/dx^m {H_n(x)} = 2^m n(n-1)...(n-m+1) H_{n-m}(x) = 2^m [n!/(n-m)!] H_{n-m}(x)
6. Show that
Hn(-x) = (-1)n Hn(x)
Solution :
we can write
from the coefficient of tn we have
and for -x
7. Evaluate the integral and show
Solution: For solving this integral, use integration by parts as
now, differentiating under the integral sign with respect to x,
using
H’n(x) = 2nHn-1 (x)
and
H’m(x) = 2mHm-1 (x)
we have
and since
delta_{n,m-1} = delta_{n+1,m}
so the value of integral will be
Conclusion:
The Hermite polynomial is a specific polynomial that occurs frequently in applications, so its basic definition, generating function, recurrence relations and related examples were discussed in brief here; if you require further reading, go through
Locus is a Latin word, derived from the words for 'place' or 'location'. The plural of locus is loci.
Definition of Locus:
In Geometry, a 'locus' is a set of points which satisfy one or more specified conditions of a figure or shape. In modern mathematics, the location or path along which a point moves on the plane, satisfying given geometrical conditions, is called the locus of the point.
In Geometry, a locus is defined for lines, line segments, and the regular or irregular curved shapes, except shapes having a vertex or angles inside them. https://en.wikipedia.org/wiki/Coordinate_system
Examples on Locus:
Lines, circles, ellipses, parabolas, hyperbolas, etc.: all these geometrical shapes are defined by the locus of points.
Equation of the Locus:
The algebraic form of the geometrical properties or conditions which are satisfied by the coordinates of all the points on Locus, is known as the equation of the locus of those points.
Method of Obtaining the Equation of the Locus:
To find the equation of the locus of a moving point on a plane, follow the process described below
(i) First, assume the coordinates of a moving point on a plane be (h,k).
(ii) Second, derive an algebraic equation in h and k from the given geometrical conditions or properties.
(iii) Third, replace h and k by x and y respectively in the said equation. This equation is called the equation of the locus of the moving point on the plane; (x,y) are the current coordinates of the moving point, and the equation of the locus must always be derived in the form of x and y, i.e. the current coordinates.
Here are some examples to make the conception clear about locus.
4+ different types of solved problems on Locus:
Problem 1: If P be any point on the XY-plane which is equidistant from two given points A(3,2) and B(2,-1) on the same plane, then find the locus and the equation of locus of the point P with graph.
Solution:
Assume that the coordinates of any point on the locus of P on XY-plane are (h, k).
Since, P is equidistant from A and B, we can write
The distance of P from A=The distance of P from B
Or, |PA|=|PB|
Or, (h^2 - 6h + 9 + k^2 - 4k + 4) = (h^2 - 4h + 4 + k^2 + 2k + 1) -------- squaring both sides.
Or, h^2 - 6h + 13 + k^2 - 4k - h^2 + 4h - 5 - k^2 - 2k = 0
Or, -2h -6k+8 = 0
Or, h+3k -4 = 0
Or, h+3k = 4 ——– (1)
This is a first degree equation of h and k.
Now if h and k are replaced by x and y then the equation (1) becomes the first degree equation of x and y in the form of x + 3y = 4 which represents a straight line.
Therefore, the locus of the point P(h, k) on XY-plane is a straight line and the equation of the locus is x + 3y = 4 . (Ans.)
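For readers who want to verify such derivations mechanically, here is a small SymPy sketch of Problem 1, with h and k playing the role of the moving point's coordinates:

```python
import sympy as sp

h, k = sp.symbols('h k', real=True)
A, B = sp.Point(3, 2), sp.Point(2, -1)
P = sp.Point(h, k)

# |PA| = |PB|  =>  square both sides and simplify
eq = sp.expand(P.distance(A) ** 2 - P.distance(B) ** 2)
print(sp.Eq(eq, 0))   # -2*h - 6*k + 8 = 0, i.e. h + 3k = 4
```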
Problem 2: If a point R moves on the XY-plane in such a way that RA : RB = 3:2, where the coordinates of the points A and B are (-5,3) and (2,4) respectively on the same plane, then find the locus of the point R.
What type of curve does the equation of the locus of R indicate?
Solution: Let us assume that the coordinates of any point on the locus of the given point R on the XY-plane are (m, n).
As per the given condition RA : RB = 3:2,
we have,
(The distance of R from A) / (The distance of R from B) = 3/2
Or, (m^2 + 10m + 34 + n^2 - 6n)/(m^2 - 4m + n^2 - 8n + 20) = 9/4 ----------- squaring both sides.
Or, 4(m^2 + 10m + 34 + n^2 - 6n) = 9(m^2 - 4m + n^2 - 8n + 20), which simplifies to 5(m^2 + n^2) - 76m - 48n + 44 = 0 -----------(1)
Now if m and n are replaced by x and y, equation (1) becomes the second-degree equation of x and y in the form 5(x^2 + y^2) - 76x - 48y + 44 = 0, where the coefficients of x^2 and y^2 are the same and the coefficient of xy is zero. This equation represents a circle.
Therefore, the locus of the point R(m, n) on the XY-plane is a circle, and the equation of the locus is
5(x^2 + y^2) - 76x - 48y + 44 = 0 (Ans.)
Problem 3: For all values of θ, (a Cosθ, b Sinθ) are the coordinates of a point P which moves on the XY-plane. Find the equation of the locus of P.
Solution: Let (h, k) be the coordinates of any point lying on the locus of P on the XY-plane.
Then, as per the question, we can say
h= a Cosθ
Or, h/a = Cosθ —————(1)
And k = b Sinθ
Or, k/b = Sinθ —————(2)
Now, squaring both equations (1) and (2) and then adding, we have the equation h^2/a^2 + k^2/b^2 = Cos^2 θ + Sin^2 θ = 1
Therefore the equation of the locus of the point P is x^2/a^2 + y^2/b^2 = 1, which represents an ellipse. (Ans.)
Problem 4: Find the equation of the locus of a point Q, moving on the XY-plane, if the coordinates of Q are ((7u-2)/(3u+2), (4u+5)/(u-1)),
where u is the variable parameter.
Solution : Let the coordinates of any point on the locus of given point Q while moving on XY-plane be (h, k).
Then, h = (7u-2)/(3u+2) and k = (4u+5)/(u-1)
i.e. h(3u+2) = 7u-2 and k(u-1) = 4u+5
i.e. (3h-7)u = -2h-2 and (k-4)u = 5+k
i.e. u = (-2h-2)/(3h-7) -------------(1)
and u = (5+k)/(k-4) -------------(2)
Now equating the equations (1) and (2) , we get,
Or, (-2h-2)(k-4) = (3h-7)(5+k)
Or, -2hk+8h-2k+8 = 15h+3hk-35-7k
Or, -2hk+8h-2k-15h-3hk+7k = -35-8
Or, -5hk-7h+5k = -43
Or, 5hk+7h-5k = 43
Therefore, the equation of the locus of Q is 5xy+7x-5y = 43.
More examples on Locus with answers for practice by your own:
Problems 5: If θ is a variable and u a constant, then find the equation of the locus of the point of intersection of the two straight lines x Cosθ + y Sinθ = u and x Sinθ - y Cosθ = u. ( Ans. x^2 + y^2 = 2u^2 )
Problems 6: Find the equation of the locus of the midpoint of the segment of the straight line x Sinθ + y Cosθ = t intercepted between the axes. ( Ans. 1/x^2 + 1/y^2 = 4/t^2 )
Problems 7: If a point P moves on the XY-plane in such a way that it always remains collinear with the two points (2,-1) and (3,4), i.e. the area of the triangle made by the point with these two points is zero, find the equation of its locus. ( Ans. 5x-y=11 )
Basic Examples on the Formulae “Centroid of a Triangle” in 2D Coordinate Geometry
Centroid: The three medians of a triangle always intersect at a point located in the interior of the triangle; this point divides each median in the ratio 2:1, measured from a vertex to the midpoint of the opposite side, and is called the centroid of the triangle.
Problems 1: Find the centroid of the triangle with vertices (-1,0), (0,4) and (5,0).
Solution: We already know,
If A(x1,y1), B(x2,y2) and C(x3,y3) are the vertices of a triangle and G(x, y) is the centroid of the triangle, then the coordinates of G are x = (x1+x2+x3)/3 and y = (y1+y2+y3)/3.
Therefore, the coordinates of the centroid of the given triangle are ((-1+0+5)/3, (0+4+0)/3) = (4/3, 4/3). (Ans)
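A one-line check in Python, since the centroid is just the average of the vertices:

```python
def centroid(p1, p2, p3):
    """Centroid of a triangle: coordinate-wise average of the vertices."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return ((x1 + x2 + x3) / 3, (y1 + y2 + y3) / 3)

print(centroid((-1, 0), (0, 4), (5, 0)))   # (1.333..., 1.333...) = (4/3, 4/3)
```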
More answered problems are given below for further practice using the procedure described in above problem 1 :-
Problems 2: Find the coordinates of the centroid of the triangle with vertices at the points (-3,-1), (-1,3) and (1,1).
Ans. (-1,1)
Problems 3: What is the x-coordinate of the centroid of the triangle with vertices (5,2), (10,4) and (6,-1) ?
Ans. 7
Problems 4: Three vertices of a triangle are (5,9), (2,15) and (11,12).Find the centroid of this triangle.
Ans. (6,12)
Shifting of Origin / Translation of Axes- 2D Co-ordinate Geometry
Shifting of Origin means shifting the origin to a new point while keeping the orientation of the axes unchanged, i.e. the new axes remain parallel to the original axes in the same plane. By this translation of axes, or shifting of origin, many problems on the algebraic equation of a geometric shape are simplified and solved more easily.
The formula of ” Shifting of Origin” or “Translation of Axes” are described below with graphical representation.
Formula:
If O is the origin, P(x,y) is any point in the XY-plane, and O is shifted to another point O′(a,b), with respect to which the coordinates of the point P become (x1,y1) in the same plane with new axes X1Y1, then the new coordinates of P are
x1 = x- a
y1 = y- b
Graphical representation for clarification: Follow the graphs
A few solved problems on the formula of ‘Shifting of Origin’:
Problem-1 : If there are two points (3,1) and (5,4) in the same plane and the origin is shifted to the point (3,1) keeping the new axes parallel to the original axes, then find the co-ordinates of the point (5,4) in respect with the new origin and axes.
Solution: Comparing with the formula of ‘Shifting of Origin’ described above , we have new Origin, O′(a, b) ≌ (3,1) i.e. a=3 , b=1 and the required point P, (x, y) ≌ (5,4) i.e. x=5 , y=4
Now, if (x1,y1) are the new coordinates of the point P(5,4), then as per the formula x1 = x-a and y1 = y-b,
we get, x1 = 5-3 and y1 =4-1
i.e. x1 = 2 and y1 =3
Therefore, the required new coordinates of the point (5,4) is (2,3) . (Ans.)
Problem-2 : After shifting the Origin to a point in the same plane, keeping the axes parallel to each other, the coordinates of a point (5,-4) become (4,-5). Find the coordinates of the new Origin.
Solution: Here using the formula of ‘Shifting the Origin’ or ‘Translation of Axes’ , we can say the coordinates of the point P with respect to old and new Origin and axes respectively are (x, y) ≌ (5,-4) i.e. x=5 , y= -4 and (x1,y1) ≌ (4,-5) i.e. x1= 4, y1= -5
Now we have to find the coordinates of the new Origin O′(a, b) i.e. a=?, b=?
Asper formula,
x1 = x- a
y1 = y- b
i.e. a = x - x1 and b = y - y1
Or, a=5-4 and b= -4-(-5)
Or, a=1 and b= -4+5
Or, a=1 and b= 1
Therefore, O'(1,1) be the new Origin i.e. the coordinates of the new Origin are (1,1). (Ans.)
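Both problems reduce to the two formulas x1 = x - a and y1 = y - b; a tiny Python helper illustrates this:

```python
def shift_origin(point, new_origin):
    """New coordinates of `point` after moving the origin to
    `new_origin`, with the axes kept parallel: x1 = x - a, y1 = y - b."""
    (x, y), (a, b) = point, new_origin
    return (x - a, y - b)

print(shift_origin((5, 4), (3, 1)))   # (2, 3)  -- Problem 1
# Problem 2 runs in reverse: new origin = old coords - new coords.
```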
Basic Examples on the Formulae “Collinearity of points (three points)” in 2D Coordinate Geometry
Problems 1:Check whether the points (1,0), (0,0) and (-1,0) are collinear or not.
Solution: We already know,
If A(x1,y1), B(x2,y2) and C(x3,y3) are any three collinear points, then the area of the triangle made by them must be zero, i.e. ½[x1(y2 - y3) + x2(y3 - y1) + x3(y1 - y2)] = 0
So, the area of the triangle is |½[x1(y2 - y3) + x2(y3 - y1) + x3(y1 - y2)]|, i.e.
(L.H.S) = |½[-1(0-0) + 0(0-0) + 1(0-0)]|
= |½[(-1)·0 + 0·0 + 1·0]|
= |½[0 + 0 + 0]|
= |½ x 0|
= 0 (R.H.S)
Therefore, the area of the triangle made by those given points become zero which means they are lying on the same line.
Therefore, the given points are collinear points. (Ans)
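The same zero-area test is easy to automate; a small Python sketch, with a tolerance to absorb floating-point error:

```python
def are_collinear(p1, p2, p3, tol=1e-12):
    """Three points are collinear iff the triangle they form has zero area."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    area2 = x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)  # twice the signed area
    return abs(area2) <= tol

print(are_collinear((1, 0), (0, 0), (-1, 0)))    # True  (Problem 1)
print(are_collinear((-3, 2), (5, -3), (2, 2)))   # False (Problem 3)
```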
More answered problems are given below for further practice using the procedure described in the aboveproblem 1 :-
Problems 2: Check whether the points (-1,-1), (0,0) and (1,1) are collinear or not.
Ans. Yes
Problems 3: Is it possible to draw one line through three points (-3,2), (5,-3) and (2,2) ?
Ans.No
Problems 4: Check whether the points (1,2), (3,2) and (-5,2),connected by lines, can form a triangle in the coordinate plane.
Ans. No
______________________________
Basic Examples on the Formulae “Incenter of a Triangle”in 2D Coordinate Geometry
Incenter: It is the center of the triangle's largest inscribed circle (the incircle), which fits inside the triangle. It is also the point of intersection of the three bisectors of the interior angles of the triangle.
Problems 1: The vertices of a triangle are (-2,0), (0,5) and (6,0). Find the incenter of the triangle.
Solution: We already know,
If A(x1,y1), B(x2,y2) and C(x3,y3) are the vertices, BC = a, CA = b and AB = c, and G′(x,y) is the incenter of the triangle, then the coordinates of G′ are x = (ax1 + bx2 + cx3)/(a+b+c) and y = (ay1 + by2 + cy3)/(a+b+c). Here a = BC = √(6^2 + 5^2) = √61, b = CA = 8 and c = AB = √(2^2 + 5^2) = √29, so the incenter is ( (-2√61 + 6√29)/(√61 + 8 + √29), 40/(√61 + 8 + √29) ). (Ans)
Basic Examples on the Formulae “Point sections or Ratio”
Case-I
Problems 21: Find the coordinates of the point P(x, y) which internally divides the line segment joining the two points (1,1) and (4,1) in the ratio 1:2.
Solution: We already know,
If a point P(x, y) divides the line segment AB internally in the ratio m:n, where the coordinates of A and B are (x1,y1) and (x2,y2) respectively, then the coordinates of P are
x = (m·x2 + n·x1)/(m+n) and y = (m·y2 + n·y1)/(m+n)
(See formulae chart)
Using this formula we can say , (x1,y1) ≌(1,1) i.e. x1=1, y1=1 ;
(x2,y2)≌(4,1) i.e. x2=4, y2=1
and
m:n ≌ 1:2 i.e m=1,n=2
Therefore,
x = (m·x2 + n·x1)/(m+n) ( putting values of m & n )
Or, x = (1·4 + 2·1)/3 ( putting values of x1 & x2 too )
Or, x = 6/3
Or, x = 2
Similarly we get,
y = (m·y2 + n·y1)/(m+n) ( putting values of m & n )
Or, y = (1·1 + 2·1)/3 ( putting values of y1 & y2 too )
Or, y = 3/3
Or, y = 1
Therefore, x=2 and y=1 are the coordinates of the point P i.e. (2,1). (Ans)
More answered problems are given below for further practice using the procedure described in above problem 21:-
Problem 22: Find the coordinates of the point which internally divides the line segment joining the two points (0,5) and (0,0) in the ratio 2:3.
Ans. (0,2)
Problem 23: Find the point which internally divides the line segment joining the points (1,1) and (4,1) in the ratio 2:1.
Ans. (3,1)
Problem 24: Find the point which lies on the line segment joining the two points (3,5) and (3,-5), dividing it in the ratio 1:1.
Ans. (3,0)
Problem 25: Find the coordinates of the point which internally divides the line segment joining the two points (-4,1) and (4,1) in the ratio 3:5
Ans. (-1,1)
Problem 26: Find the point which internally divides the line segment joining the two points (-10,2) and (10,2) in the ratio 1.5:2.5.
Ans. (-2.5,2)
_____________________________
Case-II
Problems 27: Find the coordinates of the point Q(x,y) which externally divides the line segment joining the two points (2,1) and (6,1) in the ratio 3:1.
Solution: We already know,
If a point Q(x,y) divides the line segment AB externally in the ratio m:n, where the coordinates of A and B are (x1,y1) and (x2,y2) respectively, then the coordinates of Q are
x = (m·x2 - n·x1)/(m-n) and y = (m·y2 - n·y1)/(m-n)
(See formulae chart)
Using this formula we can say , (x1,y1) ≌(2,1) i.e. x1=2, y1=1 ;
(x2,y2)≌(6,1) i.e. x2=6, y2=1 and
m:n ≌ 3:1 i.e. m=3,n=1
Therefore,
x = (m·x2 - n·x1)/(m-n) ( putting values of m & n )
Or, x = ((3·6) - (1·2))/2 ( putting values of x1 & x2 too )
Or, x = (18-2)/2
Or, x = 16/2
Or, x = 8
Similarly we get,
y = (m·y2 - n·y1)/(m-n) ( putting values of m & n )
Or, y = ((3·1) - (1·1))/2 ( putting values of y1 & y2 too )
Or, y = (3-1)/2
Or, y = 2/2
Or, y = 1
Therefore, x=8 and y=1 are the coordinates of the point Q i.e. (8,1). (Ans)
More answered problems are given below for further practice using the procedure described in above problem 27:-
Problem 28: Find the point which externally divides the line segment joining the two points (2,2) and (4,2) in the ratio 3: 1.
Ans. (5,2)
Problem 29: Find the point which externally divides the line segment joining the two points (0,2) and (0,5) in the ratio 5:2.
Ans. (0,7)
Problem 30: Find the point which lies on the extended part of the line segment joining the two points (-3,-2) and (3,-2) in the ratio 2: 1.
Ans. (9,-2)
________________________________
Case-III
Problems 31: Find the coordinates of the midpoint of the line segment joining the two points (-1,2) and (1,2).
Solution: We already know,
If a point R(x,y) is the midpoint of the line segment joining A(x1,y1) and B(x2,y2), then the coordinates of R are
x = (x1+x2)/2 and y = (y1+y2)/2
(See formulae chart)
Case-III is the special case of Case-I with m = 1 and n = 1.
Using this formula we can say , (x1,y1) ≌(-1,2) i.e. x1=-1, y1=2 and
(x2,y2)≌(1,2) i.e. x2=1, y2=2
Therefore,
x = (x1+x2)/2 = (-1+1)/2 ( putting values of x1 & x2 )
Or, x = 0/2
Or, x = 0
Similarly we get,
y = (2+2)/2 ( putting values of y1 & y2 )
Or, y = 4/2
Or, y = 2
Therefore, x=0 and y=2 are the coordinates of the midpoint R i.e. (0,2). (Ans)
More answered problems are given below for further practice using the procedure described in above problem 31:-
Problem 32: Find the coordinates of the midpoint of the line joining the two points (-1,-3) and (1,-4).
Ans. (0,-3.5)
Problem 33: Find the coordinates of the midpoint which divides the line segment joining the two points (-5,-7) and (5,7).
Ans. (0,0)
Problem 34: Find the coordinates of the midpoint which divides the line segment joining the two points (10,-5) and (-7,2).
Ans. (1.5, -1.5)
Problem 35: Find the coordinates of the midpoint which divides the line segment joining the two points (3,√2) and (1,3√2).
Ans. (2,2√2)
Problem 36: Find the coordinates of the midpoint which divides the line segment joining the two points (2+3i,5) and (2-3i,-5).
Ans. (2,0)
Note: How to check whether a point divides a line segment (length = d units) internally or externally in the ratio m:n:
for internal division, the two parts satisfy (m×d)/(m+n) + (n×d)/(m+n) = d, and
for external division, the two distances satisfy (m×d)/(m-n) - (n×d)/(m-n) = d.
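All three cases (internal, external, midpoint) can be wrapped in one Python helper built on the formulas used above; external=True switches to the m - n denominator:

```python
def section_point(A, B, m, n, external=False):
    """Point dividing segment AB in ratio m:n (internal by default).
    Internal: ((m*x2 + n*x1)/(m+n), ...); external swaps + for -."""
    (x1, y1), (x2, y2) = A, B
    if external:
        d = m - n
        return ((m * x2 - n * x1) / d, (m * y2 - n * y1) / d)
    d = m + n
    return ((m * x2 + n * x1) / d, (m * y2 + n * y1) / d)

print(section_point((1, 1), (4, 1), 1, 2))                  # (2.0, 1.0)
print(section_point((2, 1), (6, 1), 3, 1, external=True))   # (8.0, 1.0)
print(section_point((-1, 2), (1, 2), 1, 1))                 # (0.0, 2.0) midpoint
```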
COVARIANCE, VARIANCE OF SUMS, AND CORRELATIONS OF RANDOM VARIABLES
The statistical parameters of random variables of different natures are easy to obtain and understand using the definition of the expectation of a random variable; in the following we find some of these parameters with the help of the mathematical expectation of random variables.
Moments of the number of events that occur
We already know that the expectations of the powers of a random variable are its moments, and we know how to find the expectation of a random variable that counts how many events occur; now we are interested in the expectation when pairs of events occur. If X represents the number of the events A1, A2, ..., An that occur, define the indicator variable Ii = 1 if Ai occurs and Ii = 0 otherwise;
the expectation of X in the discrete sense will be
E[X] = sum over i of P(Ai)
because the random variable X is
X = sum over i of Ii
now, to find the expectation of the number of pairs of events that occur, we use combinations: the number of pairs is C(X,2) = sum over i<j of Ii Ij
this gives the expectation as E[X(X-1)]/2 = sum over i<j of P(Ai Aj)
from this we get the expectation of X squared, and hence the variance, by E[X^2] = 2 sum over i<j of P(Ai Aj) + E[X] and Var(X) = E[X^2] - (E[X])^2
By using this discussion we focus different kinds of random variable to find such moments.
Moments of binomial random variables
If p is the probability of success in each of n independent trials, let Ai denote the event that trial i is a success; then P(Ai Aj) = p^2 for i ≠ j, so E[X(X-1)] = n(n-1)p^2, which gives E[X^2] = n(n-1)p^2 + np and Var(X) = np(1-p).
This expectation can be obtained successively for higher orders; for k = 3, E[X(X-1)(X-2)] = n(n-1)(n-2)p^3,
and using this iteration we can get all the higher moments.
Moments of hypergeometric random variables
We will understand the moments of this random variable with the help of an example: suppose n pens are randomly selected from a box containing N pens of which m are blue. Let Ai denote the event that the i-th selected pen is blue; then X, the number of blue pens selected, is equal to the number of the events A1, A2, ..., An that occur, and because the i-th pen selected is equally likely to be any of the N pens, of which m are blue, P(Ai) = m/N
and so P(Ai Aj) = m(m-1)/(N(N-1)) for i ≠ j,
this gives E[X] = nm/N and E[X(X-1)] = n(n-1) m(m-1)/(N(N-1)),
so the variance of the hypergeometric random variable will be Var(X) = n (m/N)(1 - m/N)(N-n)/(N-1)
in similar way for the higher moments
hence
Moments of the negative hypergeometric random variables
Consider the example of a package containing n+m vaccines, of which n are special and m are ordinary. The vaccines are removed one at a time, each new removal being equally likely to be any of the vaccines remaining in the package. Let the random variable Y denote the number of vaccines that need to be withdrawn until a total of r special vaccines have been removed; Y follows the negative hypergeometric distribution, which is to the hypergeometric distribution somewhat as the negative binomial is to the binomial. To find the probability mass function, note that the k-th draw gives the r-th special vaccine when the first k-1 draws gave r-1 special and k-r ordinary vaccines.
now the random variable Y
Y=r+X
for the events Ai
as
hence to find the variance of Y we must know the variance of X so
hence
COVARIANCE
The relationship between two random variables can be represented by the statistical parameter covariance; before defining the covariance of two random variables X and Y, recall that for two functions g and h of independent random variables X and Y respectively, the expectation factorises: E[g(X)h(Y)] = E[g(X)] E[h(Y)].
Using this relation of expectation we can define covariance as
“The covariance between random variable X and random variable Y, denoted by cov(X,Y), is defined as cov(X,Y) = E[(X - E[X])(Y - E[Y])]”
using the definition of expectation and expanding, we get cov(X,Y) = E[XY] - E[X] E[Y]
it is clear that if the random variables X and Y are independent then cov(X,Y) = 0,
but the converse is not true; for example, if X takes the values -1, 0 and 1 with equal probability 1/3
and the random variable Y is defined as Y = 0 if X ≠ 0 and Y = 1 if X = 0,
then XY = 0 always, so E[XY] = 0 = E[X] E[Y] and the covariance is zero;
here clearly X and Y are not independent but the covariance is zero.
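A quick simulation of this example in Python: the sample covariance of X and Y stays close to zero even though Y is completely determined by X:

```python
import numpy as np

rng = np.random.default_rng(1)
# X takes -1, 0, 1 with equal probability; Y = 1 if X == 0 else 0.
X = rng.choice([-1, 0, 1], size=200_000)
Y = (X == 0).astype(float)
print(np.cov(X, Y)[0, 1])   # near 0 despite the obvious dependence
```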
Properties of covariance
Covariance between random variables X and Y has the following properties:
(i) cov(X,Y) = cov(Y,X)  (ii) cov(X,X) = var(X)  (iii) cov(aX,Y) = a·cov(X,Y)  (iv) cov(sum of Xi, sum of Yj) = double sum of cov(Xi,Yj)
Using the definition of the covariance, the first three properties are immediate, and the fourth property follows by considering
now by definition
Variance of the sums
The important result from these properties is
var(X1 + ... + Xn) = sum over i of var(Xi) + 2 sum over i<j of cov(Xi, Xj)
and if the Xi's are pairwise independent, this reduces to var(X1 + ... + Xn) = sum over i of var(Xi)
Example: Variance of a binomial random variable
If X is the random variable
X = X1 + X2 + ... + Xn, where the Xi are independent Bernoulli random variables such that Xi = 1 when the i-th trial is a success and Xi = 0 otherwise,
then find the variance of a binomial random variable X with parameters n and p.
Solution:
since the Xi are independent, cov(Xi, Xj) = 0 for i ≠ j,
so for a single variable we have E[Xi] = p and E[Xi^2] = p, hence var(Xi) = p - p^2 = p(1-p),
so the variance is var(X) = sum of var(Xi) = np(1-p).
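A short NumPy simulation confirming Var(X) = np(1-p), with n = 20 and p = 0.3 as arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, trials = 20, 0.3, 200_000
X = rng.binomial(n, p, size=trials)
print(X.var(), n * p * (1 - p))   # both close to npq = 4.2
```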
Example
For the independent random variables Xi with the respective means and variance and a new random variable with deviation as
then compute
solution:
By using the above property and definition we have
now for the random variable S
take the expectation
Example:
Find the covariance of indicator functions for the events A and B.
Solution:
for the events A and B the indicator functions are IA = 1 if A occurs (0 otherwise) and IB = 1 if B occurs (0 otherwise),
so the expectations of these are E[IA] = P(A) and E[IB] = P(B), with E[IA IB] = P(A∩B),
thus the covariance is cov(IA, IB) = E[IA IB] - E[IA] E[IB] = P(A∩B) - P(A)P(B)
Example:
Show that
where Xi are independent random variables with variance.
Solution:
The covariance using the properties and definition will be
Example:
Calculate the mean and variance of the random variable S, the sum of n sampled values, for a set of N people each of whom has an opinion about a certain subject measured by a real number v that represents the person's “strength of feeling” about the subject. Let vi represent the strength of feeling of person i, which is unknown; to collect information, a sample of n of the N people is taken randomly, and these n people are questioned so that their feelings can be used to calculate S.
Solution
let us define the indicator function as I(i) = 1 if person i is included in the random sample and I(i) = 0 otherwise,
thus we can express S as S = sum over i = 1 to N of vi I(i)
and its expectation as
this gives the variance as
since
we have
we know the identity
so
so the mean and variance for the said random variable will be
Conclusion:
The covariance represents the relation between two random variables, and using the covariance the variance of a sum was obtained for different random variables; the covariance and different moments were obtained with the help of the definition of expectation. If you require further reading, go through
Mathematical expectation plays a very important role in probability theory. We already discussed its basic definition and basic properties in some previous articles; now, having discussed the various distributions and types of distributions, in the following article we become familiar with some more advanced properties of mathematical expectation.
Expectation of sum of random variables | Expectation of function of random variables | Expectation of Joint probability distribution
We know that the mathematical expectation of a random variable of discrete nature is
E[X] = sum over x of x p(x)
and for the continuous one it is E[X] = integral of x f(x) dx;
now, for the random variables X and Y, if discrete, with the joint probability mass function p(x,y),
the expectation of a function g of the random variables X and Y will be E[g(X,Y)] = double sum of g(x,y) p(x,y)
and if continuous, with the joint probability density function f(x,y), the expectation of the function of the random variables X and Y will be E[g(X,Y)] = double integral of g(x,y) f(x,y) dx dy
if g is the addition of these two random variables, then in the continuous form E[X + Y] = E[X] + E[Y],
and if for the random variables X and Y we have
X ≥ Y
then the expectations also satisfy E[X] ≥ E[Y]
Example
A Covid-19 hospital is located at a point X uniformly distributed on a road of length L, and a vehicle carrying oxygen for the patients is at a location Y which is also uniformly distributed on the road. Find the expected distance between the Covid-19 hospital and the oxygen-carrying vehicle if X and Y are independent.
Solution:
To find the expected distance between X and Y we have to calculate E{ |X - Y| }.
Now the joint density function of X and Y will be f(x,y) = 1/L^2 for 0 < x < L, 0 < y < L,
since each location is uniform on (0, L) and the two are independent;
by following this we have E{ |X - Y| } = (1/L^2) times the double integral of |x - y| over the square
now the value of the integral, splitting at y = x, is L^3/3.
Thus the expected distance between these two points will be L/3.
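A Monte Carlo sketch of this result in Python (L = 5 is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(3)
L = 5.0
X = rng.uniform(0, L, size=1_000_000)   # hospital position
Y = rng.uniform(0, L, size=1_000_000)   # vehicle position
print(np.abs(X - Y).mean(), L / 3)      # both close to L/3
```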
Expectation of Sample mean
As the sample mean of the sequence of random variables X1, X2, ..., Xn, with distribution function F and expected value μ for each, is the average (X1 + ... + Xn)/n,
the expectation of this sample mean will be (1/n) times the sum of E[Xi], which equals μ,
which shows that the expected value of the sample mean is also μ.
Boole's inequality can also be obtained by expectation: here the Ai's are random events, the random variable X = sum of the indicators of the Ai represents the number of the events Ai that occur, and another random variable Y is defined as Y = 1 if X ≥ 1 and Y = 0 otherwise;
clearly
X ≥ Y
and so E[X] ≥ E[Y];
now, taking the values of the random variables X and Y, these expectations will be
E[X] = sum over i of P(Ai)
and E[Y] = P{at least one Ai occurs} = P(union of the Ai);
substituting these expectations in the above inequality, we get Boole's inequality as P(union of the Ai) ≤ sum over i of P(Ai)
Expectation of Binomial random variable | Mean of Binomial random variable
We know that the binomial random variable is the random variable which counts the number of successes in n independent trials with probability of success p and failure q = 1-p, so if
X = X1 + X2 + ... + Xn
where
Xi = 1 when the i-th trial is a success and Xi = 0 otherwise,
then these Xi's are Bernoulli with expectation E[Xi] = p,
so the expectation of X will be E[X] = E[X1] + ... + E[Xn] = np
Expectation of Negative binomial random variable | Mean of Negative binomial random variable
Let a random variable X represent the number of trials needed to collect r successes; such a random variable is known as a negative binomial random variable, and it can be expressed as X = X1 + X2 + ... + Xr,
where each Xi denotes the number of trials required after the (i-1)-st success to obtain the i-th success.
Since each of these Xi is a geometric random variable, and we know the expectation of the geometric random variable is E[Xi] = 1/p,
so E[X] = r/p,
which is the expectation of the negative binomial random variable.
Expectation of hypergeometric random variable | Mean of hypergeometric random variable
We will obtain the expectation, or mean, of the hypergeometric random variable with the help of a simple real-life example: if n books are randomly selected from a shelf containing N books of which m are mathematics books, then to find the expected number of mathematics books let X denote the number of mathematics books selected, so that X = X1 + X2 + ... + Xm,
where Xi = 1 if the i-th mathematics book is among the selected books and Xi = 0 otherwise,
so E[Xi] = P{Xi = 1}
= n/N
which gives E[X] = E[X1] + ... + E[Xm] = mn/N,
which is the mean of such a hypergeometric random variable.
Expected number of matches
This is a very popular problem related to expectation: suppose that in a room there are N people who throw their hats into the middle of the room; the hats are mixed, and then each person randomly chooses one. The expected number of people who select their own hat can be obtained by letting X be the number of matches, so X = X1 + X2 + ... + XN,
where Xi = 1 if the i-th person selects his own hat and Xi = 0 otherwise.
Since each person has an equal opportunity to select any of the N hats, P{Xi = 1} = 1/N,
so E[X] = N × (1/N) = 1,
which means that on average exactly one person chooses his own hat.
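A simulation of the matching problem; the average number of fixed points of a random permutation stays near 1 for any N:

```python
import numpy as np

rng = np.random.default_rng(4)
N, trials = 50, 20_000
matches = [np.count_nonzero(rng.permutation(N) == np.arange(N))
           for _ in range(trials)]
print(np.mean(matches))   # close to 1, whatever the value of N
```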
The probability of a union of events
Let us obtain the probability of the union of the events with the help of expectation so for the events Ai
with this we take
so the expectation of this will be
and expanding using expectation property as
since we have
and
so
this implies the probability of union as
Bounds from Expectation using Probabilistic method
Suppose S is a finite set, f is a function on the elements of S, and m = max over s in S of f(s);
here we can obtain a lower bound for m from the expectation of f(s), where “s” is a random element of S whose expectation we can calculate: m ≥ E[f(s)],
so here we get the expectation as a lower bound for the maximum value.
Maximum-Minimum identity
The maximum-minimum identity relates the maximum of a set of numbers to the minimums of the subsets of those numbers: for any numbers xi,
max over i of xi = sum of xi - sum over i<j of min(xi,xj) + sum over i<j<k of min(xi,xj,xk) - ... + (-1)^(n+1) min(x1,...,xn)
To show this, first restrict the xi to the interval [0,1]; suppose U is a uniform random variable on (0,1), and define the events Ai as the events that the uniform variable U is less than xi, that is Ai = {U < xi};
since at least one of the above events occurs exactly when U is less than the largest of the xi,
P(union of the Ai) = P{U < max xi} = max xi
and all the events occur exactly when U is less than every xi, so
P(intersection of the Ai) = P{U < min xi} = min xi
with these,
we have the result for the probability of a union as
following this inclusion exclusion formula for the probability
consider
this gives
since
which means
hence we can write it as
taking expectation we can find expected values of maximum and partial minimums as
Conclusion:
The expectation for various distributions, and the connection of expectation with several concepts of probability theory, were the focus of this article, which shows the use of expectation as a tool for obtaining the expected values of different kinds of random variables; if you require further reading, go through the books below.
It is very interesting to discuss the conditional case of a distribution, when two random variables follow a distribution satisfying one given another. We first look briefly at the conditional distribution for both cases of random variables, discrete and continuous; then, after studying some prerequisites, we focus on the conditional expectations.
Discrete conditional distribution
With the help of the joint probability mass function of a joint distribution, we define the conditional distribution of the discrete random variable X given Y, using conditional probability, as the distribution with the probability mass function
p(x|y) = P{X = x | Y = y} = p(x,y)/pY(y)
provided the denominator probability is greater than zero; in a similar way we can write this as a ratio of joint to marginal probabilities.
If in the joint probability X and Y are independent random variables, then this turns into p(x|y) = P{X = x} = pX(x),
so the discrete conditional distribution, or conditional distribution of the discrete random variable X given Y, is the random variable with the above probability mass function; in a similar way, the distribution of Y given X can be defined.
so using the definition of probability mass function
we have
and
Example: obtain the conditional distribution of X given X + Y = n, where X and Y are Poisson distributions with parameters λ1 and λ2, and X and Y are independent random variables.
Since the random variables X and Y are independent, the conditional distribution will have the probability mass function P{X = k | X + Y = n} = P{X = k} P{Y = n-k} / P{X + Y = n};
since the sum of independent Poisson random variables is again Poisson, X + Y is Poisson with parameter λ1 + λ2, and the probability mass function works out to be binomial: P{X = k | X + Y = n} = C(n,k) [λ1/(λ1+λ2)]^k [λ2/(λ1+λ2)]^(n-k)
thus the conditional distribution with the above probability mass function will be the conditional distribution for such Poisson distributions. The above case can be generalized to more than two random variables.
Continuous conditional distribution
The continuous conditional distribution of the random variable X given Y = y, already defined, is the continuous distribution with the probability density function
f(x|y) = f(x,y)/fY(y)
where the denominator density is greater than zero;
thus the probability for such a conditional density function is P{X in A | Y = y} = integral over A of f(x|y) dx.
In a similar way as in the discrete case, if X and Y are independent in the continuous case, then also f(x|y) = fX(x),
and hence
we can write P{X in A | Y = y} = P{X in A}.
Example on Continuous conditional distribution
Calculate conditional density function of random variable X given Y if the joint probability density function with the open interval (0,1) is given by
If for the random variable X given Y within (0,1) then by using the above density function we have
Calculate the conditional probability
if the joint probability density function is given by
To find the conditional probability first we require the conditional density function so by the definition it would be
Conditional distribution of bivariate normal distribution
We know that the Bivariate normal distribution of the normal random variables X and Y with the respective means and variances as the parameters has the joint probability density function
so to find the conditional distribution for such a bivariate normal distribution for X given Y is defined by following the conditional density function of the continuous random variable and the above joint density function we have
By observing this we can say that it is normally distributed with mean μX + ρ(σX/σY)(y - μY)
and variance σX^2 (1 - ρ^2);
in the similar way, the conditional density function for Y given X, already defined, is obtained by just interchanging the positions of the parameters of X with those of Y.
The marginal density function for X we can obtain from the above conditional density function by using the value of the constant
let us substitute in the integral
the density function will be now
since the total value of
by the definition of the probability so the density function will be now
which is nothing but the density function of random variable X with usual mean and variance as the parameters.
Joint Probability distribution of function of random variables
So far we know the joint probability distribution of two random variables; now, if we have functions of such random variables, what is the joint probability distribution of those functions, and how do we calculate the density and distribution function? We ask because real-life situations frequently present functions of random variables.
If Y1 and Y2 are functions of the random variables X1 and X2 respectively, which are jointly continuous, then the joint continuous density function of these two functions will be
f(y1,y2) = f(x1,x2) / |J(x1,x2)|
where the Jacobian is J(x1,x2) = (dg1/dx1)(dg2/dx2) - (dg1/dx2)(dg2/dx1)
and Y1 = g1(X1, X2) and Y2 = g2(X1, X2) for some functions g1 and g2. Here g1 and g2 satisfy the conditions of the Jacobian: they are continuous, have continuous partial derivatives, and the Jacobian is nonzero.
Now the probability for such functions of random variables will be
Examples on Joint Probability distribution of function of random variables
Find the joint density function of the random variables Y1 = X1 + X2 and Y2 = X1 - X2, where X1 and X2 are jointly continuous with joint probability density function; also discuss the result for distributions of different natures.
Here first we check the Jacobian:
since g1(x1, x2) = x1 + x2 and g2(x1, x2) = x1 - x2, J = (1)(-1) - (1)(1) = -2, so |J| = 2;
solving Y1 = X1 + X2 and Y2 = X1 - X2 gives the values X1 = ½(Y1 + Y2) and X2 = ½(Y1 - Y2), so f(y1,y2) = ½ f(½(y1+y2), ½(y1-y2)).
if these random variables are independent uniform random variables
or if these random variables are independent exponential random variables with usual parameters
or if these random variables are independent normal random variables then
If X and Y are independent standard normal variables,
calculate the joint distribution of the corresponding polar coordinates.
We convert X and Y into r and θ by the usual relations r = sqrt(x^2 + y^2) and θ = tan^(-1)(y/x),
so the partial derivatives of these functions are dr/dx = x/r, dr/dy = y/r, dθ/dx = -y/r^2, dθ/dy = x/r^2,
and the Jacobian using these functions is J = (x/r)(x/r^2) - (y/r)(-y/r^2) = 1/r.
if both the random variables X and Y are greater than zero then conditional joint density function is
now, converting Cartesian coordinates to polar coordinates using x = r cosθ and y = r sinθ,
so the probability density function for the positive values will be
for the different combinations of X and Y the density functions in similar ways are
now from the average of the above densities we can state the density function as
and the marginal density function from this joint density of polar coordinates over the interval (0, 2π)
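The factorisation of this joint density can be checked by simulation: for independent standard normals, R^2 behaves like an exponential with rate 1/2, θ is spread uniformly over the circle, and the two are independent (a Python sketch):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal(500_000)
Y = rng.standard_normal(500_000)
R = np.hypot(X, Y)                   # radius
theta = np.arctan2(Y, X)             # angle in (-pi, pi]

print((R ** 2).mean())               # close to 2, the mean of Exp(rate 1/2)
print(theta.min(), theta.max())      # spread over the full circle
print(np.corrcoef(R, theta)[0, 1])   # near 0: R and theta independent
```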
Find the joint density function for the function of random variables
U=X+Y and V=X/(X+Y)
where X and Y are gamma distributions with parameters (α, λ) and (β, λ) respectively.
Using the definition of gamma distribution and joint distribution function the density function for the random variable X and Y will be
consider the given functions as
g1 (x,y) =x+y , g2 (x,y) =x/(x+y),
so the partial derivatives of these functions give the Jacobian
J = (1)(-x/(x+y)^2) - (1)(y/(x+y)^2) = -1/(x+y)
after simplifying the given equations the variables x=uv and y=u(1-v) the probability density function is
we can use the relation
Calculate the joint probability density function for
for the normal variable the joint probability density function is
hence
where the index is
compute the joint density function of Y1 ……Yn and marginal density function for Yn where
and Xi are independent identically distributed exponential random variables with parameter λ.
for the random variables of the form
Y1 =X1 , Y2 =X1 + X2 , ……, Yn =X1 + ……+ Xn
the Jacobian will be of lower-triangular form with ones on the diagonal,
and hence its value is one; the joint density function for the exponential random variables is
and the values of the variable Xi ‘s will be
so the joint density function is
Now to find the marginal density function of Yn we will integrate one by one as
and
like wise
if we continue this process we will get
which is the marginal density function.
Conclusion:
The conditional distribution for the discrete and continuous random variable was discussed with different examples, considering several types of these random variables, where the independent random variable plays an important role. In addition, the joint distribution for functions of jointly continuous random variables was also explained with suitable examples; if you require further reading, go through the links below.
Jointly distributed random variables are two or more random variables whose probabilities are distributed jointly; in other words, in experiments where the different outcomes share a common probability, the variables are known as jointly distributed random variables, or as having a joint distribution. Such situations occur frequently while dealing with problems of chance.
Joint distribution function | Joint Cumulative probability distribution function | joint probability mass function | joint probability density function
For the random variables X and Y, the distribution function, or joint cumulative distribution function, is
F(a,b) = P{X ≤ a, Y ≤ b}, for all real a and b
where the nature of the joint probability depends on the nature of the random variables X and Y, discrete or continuous; the individual distribution function for X can be obtained from this joint cumulative distribution function as F(a, infinity) = P{X ≤ a}
similarly for Y as F(infinity, b) = P{Y ≤ b};
these individual distribution functions of X and Y are known as marginal distribution functions when a joint distribution is under consideration. These distributions are very helpful for getting probabilities like P{a1 < X ≤ a2, b1 < Y ≤ b2}
and in addition the joint probability mass function for the random variables X and Y is defined as p(x,y) = P{X = x, Y = y}
the individual probability mass or density functions for X and Y can be obtained with the help of such a joint probability mass or density function; in terms of discrete random variables, pX(x) = sum over y of p(x,y) and pY(y) = sum over x of p(x,y), while for jointly continuous random variables P{(X,Y) in C} = double integral over C of f(x,y) dx dy
where C is any two-dimensional region, and the joint distribution function for continuous random variables will be F(a,b) = the double integral of f(x,y) over x ≤ a, y ≤ b
the probability density function from this distribution function can be obtained by differentiating:
f(a,b) = the second mixed partial derivative of F(a,b) with respect to a and b
and the marginal probability densities from the joint probability density function are
fX(x) = integral of f(x,y) dy
and fY(y) = integral of f(x,y) dx, integrating out the other variable in each case.
Examples on Joint distribution
Example: find the joint probabilities for the random variables X and Y representing the numbers of mathematics and statistics books, respectively, when 3 books are taken randomly from a set containing 3 mathematics, 4 statistics and 5 physics books.
Example: find the joint probability mass function for a sample of families having 15% with no child, 20% with 1 child, 35% with 2 children and 30% with 3 children, if a family is chosen randomly from this sample and each child may be a boy or a girl.
The joint probability we will find by using the definition as
and this we can illustrate in the tabular form as follows
Calculate the probabilities
if for the random variables X and Y the joint probability density function is given by
with the help of definition of joint probability for continuous random variable
and the given joint density function the first probability for the given range will be
in the similar way the probability
and finally
Find the density function of the quotient X/Y of random variables X and Y whose joint probability density function is given.
To find the probability density function of X/Y we first find its distribution function and then differentiate the obtained result. By the definition of the distribution function and the given joint probability density function,
F_{X/Y}(a) = P{X/Y ≤ a} = ∬ over {x/y ≤ a} f(x, y) dx dy
and differentiating this distribution function with respect to a gives the density function, where a runs from zero to infinity.
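The specific density for this example is not reproduced above; as a minimal sketch, assuming the common textbook choice f(x, y) = e^(−(x+y)) for x, y > 0 (so X and Y are independent Exp(1) and F_{X/Y}(a) = a/(1 + a)), a Monte Carlo check looks like:

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed joint density f(x, y) = e^-(x+y), x, y > 0, i.e. X and Y
# independent Exp(1); then F_{X/Y}(a) = a / (1 + a).
x = rng.exponential(size=1_000_000)
y = rng.exponential(size=1_000_000)
a = 2.0
print(np.mean(x / y <= a))   # empirical CDF of X/Y at a
print(a / (1 + a))           # closed form: 0.6666...
```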
Independent random variables and joint distribution
In a joint distribution, two random variables X and Y are said to be independent if
P{X ∈ A, Y ∈ B} = P{X ∈ A} P{Y ∈ B}
where A and B are arbitrary sets of real numbers. As we already know in terms of events, independent random variables are random variables whose events are independent.
Thus for any values of a and b
P{X ≤ a, Y ≤ b} = P{X ≤ a} P{Y ≤ b}
and the joint distribution or cumulative distribution function for the independent random variables X and Y will be
F(a, b) = F_X(a) F_Y(b)
If we consider discrete random variables X and Y, then
p(x, y) = p_X(x) p_Y(y)
since the joint probability factors over the events {X = x} and {Y = y}, and similarly for continuous random variables
f(x, y) = f_X(x) f_Y(y)
Example of independent joint distribution
1. If on a specific day the patients entering a hospital are Poisson distributed with parameter λ, and the probability that a patient is male is p and female is (1 − p), show that the numbers of male and female patients entering the hospital are independent Poisson random variables with parameters λp and λ(1 − p).
Consider the numbers of male and female patients as the random variables X and Y; then, conditioning on the total,
P{X = i, Y = j} = P{X = i, Y = j | X + Y = i + j} P{X + Y = i + j}
Since X + Y is the total number of patients entering the hospital, which is Poisson distributed,
P{X + Y = i + j} = e^(−λ) λ^(i+j)/(i + j)!
and since each patient is male with probability p and female with probability (1 − p), the number of males out of a fixed total follows the binomial probability
P{X = i, Y = j | X + Y = i + j} = C(i + j, i) p^i (1 − p)^j
Using these two values we get the joint probability
P{X = i, Y = j} = [e^(−λp) (λp)^i / i!] · [e^(−λ(1−p)) (λ(1−p))^j / j!]
Thus the probabilities of male and female patients will be
P{X = i} = e^(−λp) (λp)^i / i! and P{Y = j} = e^(−λ(1−p)) (λ(1−p))^j / j!
which shows that both of them are Poisson random variables with parameters λp and λ(1 − p), and that they are independent, since the joint probability factors.
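A quick simulation of this thinning property, a sketch of our own rather than part of the original example:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, p, n = 10.0, 0.3, 200_000

total = rng.poisson(lam, size=n)       # patients entering, Poisson(lam)
males = rng.binomial(total, p)         # each patient male with probability p
females = total - males

print(males.mean(), lam * p)           # both near 3.0
print(females.mean(), lam * (1 - p))   # both near 7.0
print(np.cov(males, females)[0, 1])    # near 0, consistent with independence
```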
2. Find the probability that the first of two people to arrive at a meeting has to wait more than ten minutes for the other, if the person and the client each arrive independently at a time uniformly distributed between 12 noon and 1 pm.
Consider the random variables X and Y denoting the arrival times, in minutes past 12, of the person and the client; the joint density of X and Y is uniform over the square (0, 60) × (0, 60), so the required probability is
P{X + 10 < Y} + P{Y + 10 < X} = 2 P{X + 10 < Y} = (50/60)² = 25/36
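A hedged Monte Carlo check of this answer, our own sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 60, size=1_000_000)   # person's arrival, minutes past 12
y = rng.uniform(0, 60, size=1_000_000)   # client's arrival
print(np.mean(np.abs(x - y) > 10))       # empirical probability
print((50 / 60) ** 2)                    # exact: 25/36 = 0.6944...
```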
3. Calculate the probability of the given event involving X, Y and Z, where X, Y and Z are independent uniform random variables over the interval (0, 1). Here, since for the uniform distribution the density function equals 1 over the given range, the joint density of X, Y and Z is f(x, y, z) = 1 on the unit cube, and the required probability reduces to the volume of the corresponding region of the cube.
Sums of independent random variables by joint distribution
For the sum of independent continuous random variables X and Y with probability density functions f_X and f_Y, the cumulative distribution function of X + Y will be
F_{X+Y}(a) = P{X + Y ≤ a} = ∫ F_X(a − y) f_Y(y) dy
and by differentiating this cumulative distribution function, the probability density function of the sum of these independent variables is
f_{X+Y}(a) = ∫ f_X(a − y) f_Y(y) dy
Following these two results, we will look at some continuous random variables and the sums of such independent variables.
sum of independent uniform random variables
For the random variables X and Y uniformly distributed over the interval (0, 1), the probability density function of each of these independent variables is
f_X(a) = f_Y(a) = 1, 0 < a < 1
so for the sum X + Y we have
f_{X+Y}(a) = ∫ from 0 to 1 f_X(a − y) dy
For any value of a between zero and one this gives
f_{X+Y}(a) = ∫ from 0 to a dy = a
and if we restrict a to between one and two it will be
f_{X+Y}(a) = ∫ from a−1 to 1 dy = 2 − a
This gives the triangular-shaped density function. If we generalize to n independent uniform random variables X1 to Xn, then by mathematical induction their distribution function satisfies
P{X1 + X2 + … + Xn ≤ x} = x^n / n!, 0 ≤ x ≤ 1
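A short numerical check of the triangular density for two uniforms, a sketch of our own:

```python
import numpy as np

rng = np.random.default_rng(3)
s = rng.uniform(size=(1_000_000, 2)).sum(axis=1)   # X + Y with X, Y ~ U(0,1)

# Triangular density: f(a) = a on [0, 1] and f(a) = 2 - a on (1, 2).
hist, edges = np.histogram(s, bins=40, range=(0, 2), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
exact = np.where(mid <= 1, mid, 2 - mid)
print(np.max(np.abs(hist - exact)))   # small: histogram matches the triangle
```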
sum of independent Gamma random variables
If we have two independent gamma random variables X and Y with their usual density functions
f_X(x) = λe^(−λx)(λx)^(s−1)/Γ(s) and f_Y(y) = λe^(−λy)(λy)^(t−1)/Γ(t)
then, following the convolution formula above, the density of the sum of the independent gamma random variables is
f_{X+Y}(a) = λe^(−λa)(λa)^(s+t−1)/Γ(s + t)
which shows that the sum of independent gamma random variables with a common rate λ is again a gamma random variable, with shape parameter s + t.
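A quick verification sketch of our own, using scipy's gamma with scale = 1/λ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
s, t, lam = 2.0, 3.5, 1.5

# Gamma(shape, rate lam) sampled via scale = 1/lam in the numpy convention.
x = rng.gamma(s, 1 / lam, size=200_000)
y = rng.gamma(t, 1 / lam, size=200_000)

# Kolmogorov-Smirnov test of X + Y against Gamma(s + t, rate lam);
# a KS statistic near zero indicates agreement.
print(stats.kstest(x + y, stats.gamma(s + t, scale=1 / lam).cdf))
```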
sum of independent exponential random variables
In a similar way to the gamma random variables, the density and distribution function of a sum of independent exponential random variables can be obtained by assigning the specific gamma parameter values: since an exponential random variable with rate λ is Gamma(1, λ), the sum of n independent such variables is Gamma(n, λ).
Sum of independent normal random variable | sum of independent Normal distribution
If we have n independent normal random variables Xi, i = 1, 2, 3, …, n, with respective means μi and variances σi², then their sum is also a normal random variable, with mean Σμi and variance Σσi².
We first show that the sum of two independent normal random variables is normally distributed. Take X with parameters 0 and σ² and Y with parameters 0 and 1, and find the probability density function of the sum X + Y from the convolution
f_{X+Y}(a) = ∫ f_X(a − y) f_Y(y) dy
Substituting the normal density functions and completing the square in the exponent, the density function works out to
f_{X+Y}(a) = (1/√(2π(1 + σ²))) e^(−a²/(2(1 + σ²)))
which is nothing but the density function of a normal distribution with mean 0 and variance (1 + σ²); following the same argument, for general parameters the sum X + Y is again normal
with the usual means and variances. Expanding the argument, the sum is normally distributed with mean equal to the sum of the respective means and variance equal to the sum of the respective variances; thus, in the same way, the sum of n such variables is a normally distributed random variable with mean Σμi and variance Σσi².
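A small simulation of our own illustrating the result:

```python
import numpy as np

rng = np.random.default_rng(5)
mu = np.array([1.0, -2.0, 0.5])
sigma = np.array([0.5, 1.0, 2.0])

# n independent normals; the sum should be N(sum(mu), sum(sigma**2)).
x = rng.normal(mu, sigma, size=(500_000, 3)).sum(axis=1)
print(x.mean(), mu.sum())            # both near -0.5
print(x.var(), (sigma ** 2).sum())   # both near 5.25
```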
Sums of independent Poisson random variables
If we have two independent Poisson random variables X and Y with parameters λ1 and λ2, then their sum X + Y is also a Poisson random variable. Since X and Y are Poisson distributed and we can write their sum as a union of disjoint events,
P{X + Y = n} = Σ from k=0 to n of P{X = k, Y = n − k}
and by using the independence of the random variables and the binomial theorem,
P{X + Y = n} = Σ from k=0 to n of [e^(−λ1) λ1^k/k!] [e^(−λ2) λ2^(n−k)/(n − k)!] = e^(−(λ1+λ2)) (λ1 + λ2)^n / n!
so the sum X + Y is also Poisson distributed, with mean λ1 + λ2.
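A brief check of our own comparing the empirical pmf of the sum with Poisson(λ1 + λ2):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
l1, l2 = 2.0, 5.0
s = rng.poisson(l1, 300_000) + rng.poisson(l2, 300_000)

# Compare the empirical pmf of X + Y with Poisson(l1 + l2).
for n in range(5):
    print(n, np.mean(s == n), stats.poisson(l1 + l2).pmf(n))
```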
Sums of independent binomial random variables
If we have two independent binomial random variables X and Y with parameters (n, p) and (m, p), then their sum X + Y is also a binomial random variable, distributed with parameters (n + m, p). Let us use the probability of the sum together with the definition of the binomial, writing q = 1 − p:
P{X + Y = k} = Σ over i of P{X = i} P{Y = k − i} = Σ over i of C(n, i) p^i q^(n−i) C(m, k − i) p^(k−i) q^(m−k+i)
which, by the combinatorial identity Σ over i of C(n, i) C(m, k − i) = C(n + m, k), gives
P{X + Y = k} = C(n + m, k) p^k q^(n+m−k)
so the sum X + Y is also binomially distributed, with parameters (n + m, p).
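A short verification of our own:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, m, p = 4, 6, 0.3
s = rng.binomial(n, p, 300_000) + rng.binomial(m, p, 300_000)

# X + Y should be Binomial(n + m, p); compare the pmfs.
for k in range(4):
    print(k, np.mean(s == k), stats.binom(n + m, p).pmf(k))
```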
Conclusion:
The concept of jointly distributed random variables, which gives the distribution of more than one variable at a time, was discussed, along with the basic concept of independent random variables in terms of the joint distribution; sums of independent variables were worked out for several example distributions together with their parameters. If you require further reading, go through the mentioned books. For more posts on mathematics, please click here.
Special form of Gamma distributions and relationships of Gamma distribution
In this article we will discuss the special forms of gamma distributions and the relationships of the gamma distribution with different continuous and discrete random variables; some estimation methods used when sampling a population with the gamma distribution are also briefly discussed.
Gamma distribution exponential family
The gamma distribution belongs to the exponential family; it is a two-parameter exponential family, a large and widely applicable family of distributions, since most real-life problems can be modelled within it and quick, useful calculations inside the exponential family can be done easily. In the two-parameter form the probability density function is
f(x) = λe^(−λx)(λx)^(α−1)/Γ(α), x > 0
If we fix the known value of α (alpha), this two-parameter family reduces to a one-parameter exponential family in λ (lambda), and likewise fixing λ gives a one-parameter family in α.
Relationship between gamma and normal distribution
In the probability density function of the gamma distribution, if we take alpha near 50 the density already looks close to a normal curve. As we increase the shape parameter the resemblance to the normal curve grows; if we let the shape parameter alpha tend to infinity the gamma distribution becomes more and more symmetric and normal-like. However, the variable x is restricted to positive values, so the gamma distribution has semi-infinite support; hence, even when it becomes nearly symmetric, it is not the same as the normal distribution, whose support is the whole real line.
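A small numerical illustration of our own of this convergence, comparing Gamma(α, 1) with the normal distribution of the same mean α and variance α:

```python
import numpy as np
from scipy import stats

# Gamma(alpha, scale=1) versus the normal with matching mean and variance:
# the larger alpha is, the closer the two densities agree.
for alpha in (5, 50, 500):
    x = np.linspace(alpha - 4 * alpha ** 0.5, alpha + 4 * alpha ** 0.5, 1000)
    gap = np.max(np.abs(stats.gamma(alpha).pdf(x)
                        - stats.norm(alpha, alpha ** 0.5).pdf(x)))
    print(alpha, gap)   # the gap shrinks as alpha grows
```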
poisson gamma distribution | poisson gamma distribution negative binomial
The Poisson-gamma distribution and the binomial distribution concern discrete random variables, specifically counts of success and failure in repeated Bernoulli trials. The mixture of the Poisson and gamma distributions, also known as the negative binomial distribution, arises from repeated Bernoulli trials, and it can be parameterized in different ways: if X is the number of the trial on which the r-th success occurs, then
P{X = n} = C(n − 1, r − 1) p^r (1 − p)^(n−r), n = r, r + 1, r + 2, …
and if Y counts the number of failures before the r-th success, then
P{Y = k} = C(k + r − 1, k) p^r (1 − p)^k, k = 0, 1, 2, …
Considering the values of r and p, the failure-count form is the general form of the parameterization of the negative binomial or Poisson-gamma distribution, and the trial-count form is the alternative one.
This binomial distribution is known as negative because of the coefficient identity
C(k + r − 1, k) = (−1)^k C(−r, k)
and this negative binomial or Poisson-gamma distribution is well defined, since the total probability sums to one:
Σ over k ≥ 0 of C(k + r − 1, k) p^r (1 − p)^k = p^r (1 − (1 − p))^(−r) = 1
The mean and variance of this negative binomial or Poisson-gamma distribution, in the failure-count parameterization, are
E[Y] = r(1 − p)/p and Var(Y) = r(1 − p)/p²
The Poisson and gamma relation can be seen by the following calculation: if the Poisson parameter λ itself follows a gamma distribution and X given λ is Poisson(λ), then integrating λ out of the joint density yields exactly the negative binomial probability mass function. Thus the negative binomial is a mixture of the Poisson and gamma distributions, and this distribution is used in modelling everyday problems where such a mixture of discrete and continuous behaviour is required.
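A sketch of our own demonstrating the mixture numerically, with hypothetical parameters r and β (gamma shape r and rate β, giving negative binomial success probability p = β/(1 + β)):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
r, beta = 3, 2.0                         # assumed illustration values

lam = rng.gamma(r, 1 / beta, 400_000)    # lambda ~ Gamma(r, rate beta)
x = rng.poisson(lam)                     # X | lambda ~ Poisson(lambda)

# The mixture should be negative binomial with p = beta / (1 + beta).
p = beta / (1 + beta)
for k in range(4):
    print(k, np.mean(x == k), stats.nbinom(r, p).pmf(k))
```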
Weibull gamma distribution
There are generalizations of the exponential distribution which involve the Weibull as well as the gamma distribution. The Weibull distribution has the probability density function
f(x) = (k/λ)(x/λ)^(k−1) e^(−(x/λ)^k), x ≥ 0
and cumulative distribution function
F(x) = 1 − e^(−(x/λ)^k), x ≥ 0
whereas the pdf and cdf of the gamma distribution were already discussed above. The main connection between the Weibull and gamma distributions is that both are generalizations of the exponential distribution; a rough practical difference is that when the power of the variable is greater than one the Weibull distribution tends to give quicker results, while for powers less than one the gamma distribution does. We will not discuss the generalized Weibull-gamma distribution here, as it requires a separate discussion.
application of gamma distribution in real life | gamma distribution uses | application of gamma distribution in statistics
There are a number of applications where the gamma distribution is used to model a situation, such as aggregating insurance claims, accumulated rainfall amounts, manufacturing and distribution of products, traffic on a specific website, telecom exchanges, and so on; in fact, the gamma distribution gives the predicted waiting time until the nth event. There are many such applications of the gamma distribution in real life.
beta gamma distribution | relationship between gamma and beta distribution
The beta distribution is the distribution of a random variable with probability density function
f(x) = x^(a−1)(1 − x)^(b−1)/B(a, b), 0 < x < 1
where
B(a, b) = ∫ from 0 to 1 of x^(a−1)(1 − x)^(b−1) dx
which has the relationship with the gamma function
B(a, b) = Γ(a)Γ(b)/Γ(a + b)
The beta distribution is related to the gamma distribution as follows: if X is a gamma random variable with shape parameter alpha and Y is an independent gamma random variable with shape parameter beta, both with the same scale, then the random variable X/(X + Y) has a beta distribution.
That is, if X is Gamma(α, 1) and Y is Gamma(β, 1), independently, then X/(X + Y) is Beta(α, β),
and also X + Y is independent of X/(X + Y).
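A quick Monte Carlo check of this relation, our own sketch:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
a, b = 2.0, 5.0
x = rng.gamma(a, size=300_000)   # Gamma(a, 1)
y = rng.gamma(b, size=300_000)   # Gamma(b, 1)

# X / (X + Y) should be Beta(a, b); a KS statistic near zero confirms it.
print(stats.kstest(x / (x + y), stats.beta(a, b).cdf))
```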
bivariate gamma distribution
A two-dimensional or bivariate random vector is continuous if there exists a function f(x, y) such that the joint distribution function is
F(x, y) = ∫ from −∞ to y ∫ from −∞ to x f(u, v) du dv
where f(x, y) ≥ 0, and the joint probability density function is obtained by
f(x, y) = ∂²F(x, y)/∂x∂y
There are a number of bivariate gamma distributions; one of them is the bivariate gamma distribution whose probability density function is built from gamma marginals.
double gamma distribution
The double gamma distribution is one of the bivariate distributions, built from gamma random variables with shape parameter alpha and scale one through their joint probability density function; this density forms the double gamma distribution for the respective random variables, and its moment generating function is derived from the same joint density.
relation between gamma and exponential distribution | exponential and gamma distribution | gamma exponential distribution
Since the exponential distribution is the distribution with probability density function
f(x) = λe^(−λx), x > 0
and the gamma distribution has the probability density function
f(x) = λe^(−λx)(λx)^(α−1)/Γ(α), x > 0
clearly, putting the value of alpha equal to one recovers the exponential distribution; that is, the gamma distribution is nothing but a generalization of the exponential distribution. It predicts the waiting time until the occurrence of the nth event, while the exponential distribution predicts the waiting time until the occurrence of the next single event.
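A one-line check of our own that the gamma distribution with α = 1 coincides with the exponential:

```python
import numpy as np
from scipy import stats

# With shape alpha = 1 the gamma density reduces to the exponential density.
x = np.linspace(0, 10, 1000)
lam = 1.5
print(np.max(np.abs(stats.gamma(1, scale=1 / lam).pdf(x)
                    - stats.expon(scale=1 / lam).pdf(x))))   # ~ 0
```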
fit gamma distribution
Fitting given data with a gamma distribution means finding the parameters of its probability density function, which involve shape, location and scale; finding these parameters for a given application and then calculating the mean, variance, standard deviation and moment generating function constitutes the fitting of the gamma distribution. Since many different real-life problems can be modelled with a gamma distribution, the information in each situation must be fitted to it, and for this purpose various techniques already exist in various environments, e.g. in R, Matlab, Excel, etc.
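As a concrete illustration of our own, in Python rather than R, Matlab or Excel, scipy's gamma.fit performs the maximum-likelihood fit:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
data = rng.gamma(2.5, 2.0, size=10_000)   # synthetic data: shape 2.5, scale 2

# Maximum-likelihood fit; floc=0 pins the location parameter at zero.
shape, loc, scale = stats.gamma.fit(data, floc=0)
print(shape, scale)                       # near 2.5 and 2.0
```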
shifted gamma distribution
Whenever an application requires shifting the two-parameter gamma distribution, a new three-parameter (or otherwise generalized) gamma distribution is used that shifts the shape, location and scale; such a gamma distribution is known as a shifted gamma distribution.
truncated gamma distribution
If we restrict the range or domain of the gamma distribution for its shape, scale and location parameters, the restricted distribution is known as a truncated gamma distribution, based on the conditions imposed.
survival function of gamma distribution
The survival function for the gamma distribution is defined as the function s(x) given by
s(x) = 1 − F(x) = ∫ from x to ∞ of λe^(−λt)(λt)^(α−1)/Γ(α) dt
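In scipy this is available directly, a short sketch of our own:

```python
from scipy import stats

# Survival function s(x) = 1 - F(x) for Gamma(alpha=2, rate lam=0.5).
g = stats.gamma(2, scale=1 / 0.5)
print(g.sf(3.0))        # direct survival function
print(1 - g.cdf(3.0))   # the same value, from the definition
```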
mle of gamma distribution | maximum likelihood gamma distribution | likelihood function of gamma distribution
We know that maximum likelihood takes a sample from the population as representative and treats it as an estimator of the probability density function, maximizing over the parameters of the density. Before turning to the gamma distribution, recall some basics: for a random variable X whose probability density function has parameter theta, the likelihood function of a sample x1, …, xn is
L(θ) = f(x1; θ) f(x2; θ) … f(xn; θ)
which we can express as
L(θ) = Π from i=1 to n of f(xi; θ)
and the method of maximizing this likelihood function is to take the estimate θ̂ such that
L(θ̂) = sup over θ of L(θ)
if such a theta satisfies this equation; since log is a monotone function, we can equivalently write this in terms of the log,
log L(θ̂) = sup over θ of log L(θ)
and such a supremum exists if the log-likelihood is differentiable and the equation ∂ log L(θ)/∂θ = 0 has a solution.
Now we apply maximum likelihood to the gamma distribution density
f(x; α, λ) = λe^(−λx)(λx)^(α−1)/Γ(α)
The log-likelihood of the sample will be
log L(α, λ) = nα log λ + (α − 1) Σ log xi − λ Σ xi − n log Γ(α)
so
∂ log L/∂λ = nα/λ − Σ xi = 0, giving λ̂ = α̂/x̄
and hence, substituting back and differentiating with respect to α,
n log λ̂ + Σ log xi − n ψ(α̂) = 0
where ψ = Γ′/Γ is the digamma function. This equation has no closed-form solution, so the parameter α̂ is obtained numerically, and λ̂ then follows from λ̂ = α̂/x̄; the same estimates can also be achieved by differentiating the log-likelihood with respect to each parameter in turn and solving the resulting system.
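A numerical sketch of our own solving these MLE equations with brentq and the digamma function:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma

rng = np.random.default_rng(11)
x = rng.gamma(3.0, 1 / 2.0, size=50_000)   # Gamma(alpha=3, rate lam=2)

# MLE equations: lam_hat = alpha_hat / mean(x) and
# log(alpha_hat) - digamma(alpha_hat) = log(mean(x)) - mean(log(x)).
c = np.log(x.mean()) - np.log(x).mean()
alpha_hat = brentq(lambda a: np.log(a) - digamma(a) - c, 1e-3, 1e3)
lam_hat = alpha_hat / x.mean()
print(alpha_hat, lam_hat)                  # near 3.0 and 2.0
```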
gamma distribution parameter estimation method of moments | method of moments estimator gamma distribution
We can calculate the moments of the population and the sample using expectations of nth order; the method of moments equates these moments of the distribution and the sample to estimate the parameters. Suppose we have a sample of gamma random variables with probability density function
f(x; α, λ) = λe^(−λx)(λx)^(α−1)/Γ(α)
We know the first two moments of this probability density function are
E[X] = α/λ and E[X²] = α(α + 1)/λ²
so from the first moment
λ = α/E[X]
Substituting this lambda into the second moment,
E[X²] = (α + 1) E[X]²/α
and from this the value of alpha is
α = E[X]²/(E[X²] − E[X]²)
and now lambda will be
λ = E[X]/(E[X²] − E[X]²)
and the moment estimators using the sample are
α̂ = X̄²/((1/n) Σ Xi² − X̄²) and λ̂ = X̄/((1/n) Σ Xi² − X̄²)
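These estimators in a few lines, our own sketch:

```python
import numpy as np

rng = np.random.default_rng(12)
x = rng.gamma(3.0, 1 / 2.0, size=50_000)   # Gamma(alpha=3, rate lam=2)

m1 = x.mean()
m2 = (x ** 2).mean()
alpha_hat = m1 ** 2 / (m2 - m1 ** 2)       # alpha = m1^2 / (m2 - m1^2)
lam_hat = m1 / (m2 - m1 ** 2)              # lambda = m1 / (m2 - m1^2)
print(alpha_hat, lam_hat)                  # near 3.0 and 2.0
```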
confidence interval for gamma distribution
A confidence interval for the gamma distribution is a way to estimate a parameter together with its uncertainty: it tells us the interval that is expected to contain the true value of the parameter at a given confidence level. This interval is obtained from observations of random variables, and since it is computed from random data it is itself random; to obtain a confidence interval for the gamma distribution there are different techniques for different applications that we have to follow.
gamma distribution conjugate prior for exponential distribution | gamma prior distribution | posterior distribution poisson gamma
The posterior and prior distributions are terminologies of Bayesian probability theory; two distributions are conjugate if, when one is taken as the prior, the posterior belongs to the same family. In terms of theta, let us show that the gamma distribution is a conjugate prior for the exponential distribution.
If the probability density function of the gamma prior for theta is
p(θ) ∝ θ^(α−1) e^(−βθ)
and we assume the given data x1, …, xn are exponentially distributed with rate theta,
f(x | θ) = θ e^(−θx)
so that the joint likelihood is
f(x1, …, xn | θ) = θ^n e^(−θ Σ xi)
then, using the relation
posterior ∝ likelihood × prior
we have
p(θ | x) ∝ θ^(n+α−1) e^(−θ(β + Σ xi))
which is a gamma density with parameters (α + n, β + Σ xi); so the gamma distribution is a conjugate prior for the exponential distribution, as the posterior is again a gamma distribution.
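The update in code, our own sketch with hypothetical prior parameters α0 and β0:

```python
import numpy as np

rng = np.random.default_rng(13)
alpha0, beta0 = 2.0, 1.0                  # assumed Gamma(alpha0, rate beta0) prior
theta_true = 0.8
x = rng.exponential(1 / theta_true, size=100)   # data: Exp(theta_true)

# Conjugate update: the posterior is Gamma(alpha0 + n, beta0 + sum(x)).
alpha_post = alpha0 + len(x)
beta_post = beta0 + x.sum()
print(alpha_post / beta_post)             # posterior mean, near theta_true
```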
gamma distribution quantile function
The quantile function of the gamma distribution gives the point of the distribution corresponding to a given rank order, i.e. probability, of the values; it is the inverse of the cumulative distribution function, and different languages provide different algorithms and functions for the gamma quantile.
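For example, in Python, a short sketch of our own, the quantile function is ppf:

```python
from scipy import stats

g = stats.gamma(2.0, scale=1 / 1.5)   # Gamma(alpha=2, rate 1.5)
q = g.ppf(0.95)                       # quantile function (inverse CDF)
print(q, g.cdf(q))                    # cdf(ppf(p)) recovers p = 0.95
```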
generalized gamma distribution
As the gamma distribution itself is a generalization of the exponential family of distributions, adding more parameters to it gives the generalized gamma distribution, a further generalization of this distribution family. Different physical requirements give different generalizations; one frequent form has the probability density function
f(x) = (p/a^d) x^(d−1) e^(−(x/a)^p)/Γ(d/p), x > 0
The cumulative distribution function of such a generalized gamma distribution can be obtained as
F(x) = γ(d/p, (x/a)^p)/Γ(d/p)
where the numerator represents the lower incomplete gamma function
γ(s, x) = ∫ from 0 to x of t^(s−1) e^(−t) dt
Using this incomplete gamma function, the survival function for the generalized gamma distribution can be obtained as
S(x) = 1 − γ(d/p, (x/a)^p)/Γ(d/p)
Another version of this three-parameter generalized gamma distribution has a probability density function with parameters k, β, θ, all greater than zero. This generalization has convergence issues when fitting; to overcome them, Weibull-type parameters replace the original ones, and with this parameterization the convergence of the density function is obtained, so the broader generalization of the gamma distribution with good convergence is the distribution with the resulting probability density function.
Beta generalized gamma distribution
Because the density function of the gamma distribution involves the rate parameter, often denoted beta, the gamma distribution written with density
f(x) = β^α x^(α−1) e^(−βx)/Γ(α), x > 0
and corresponding cumulative distribution function G(x) is sometimes used as the baseline for the beta generalized gamma distribution; this density was already discussed in detail in the discussion of the gamma distribution. The beta generalized gamma distribution is then defined with the cdf
F(x) = (1/B(a, b)) ∫ from 0 to G(x) of t^(a−1)(1 − t)^(b−1) dt
where B(a, b) is the beta function, and the probability density function for this can be obtained by differentiation; the density function will be
f(x) = (1/B(a, b)) g(x) G(x)^(a−1) (1 − G(x))^(b−1)
where G(x) is the above-defined cumulative distribution function of the gamma distribution and g(x) the corresponding density; substituting this G gives the cumulative distribution function of the beta generalized gamma distribution explicitly.
There are different forms and generalizations of the gamma distribution, including the gamma distribution exponential family, arising from real-life situations; the main such forms and generalizations were covered here, together with the estimation methods for the gamma distribution in population sampling. If you require further reading on the gamma distribution exponential family, please go through the links and books below. For more topics on Mathematics, please visit our page.