PRINT VERSION MODULE
  Objective  
  Introduction  
  Statistical Terms/Parameters often used in Frequency Analysis
 
  Dispersion Characteristics  
   
 

  Which value/data qualifies as the annual peak of a year?
  How to Ensure Fitness of Data for Frequency Analysis?
  Empirical vs. Theoretical Distribution Curve  
  Plotting Position  
  Which Distribution Fits Well?  
  Case Study  
  Confidence Bands and Confidence Limits  
  Expected Probability  
  How to Perform D-Index Test  
  Outliers  
  Handling Different Scenarios  
  References  
  Contributor  
  Acknowledgement  
   
   OBJECTIVE
       
 
  • To get familiarized with select statistical parameters
  • To grasp the difference between empirical and theoretical frequency distributions
  • To understand and perform various tests to ensure fitness of data for flood frequency analysis
  • To learn how to plot a confidence band and its significance
  • To grasp the meaning and significance of confidence band, confidence limit, outlier, expected probability, etc.
 
   
   INTRODUCTION
       
 

The previous module on this topic provided elementary knowledge of flood frequency analysis. This module moves a step further, and enables the reader to handle complex problems related to this topic.

Estimates of extreme events of specified recurrence interval are used for a host of purposes, such as design of dams, coffer dams, bridges, flood-plain delineation, flow diversion projects, barrages, and also to determine the impact of encroachment on flood plains. Frequency analysis, when done manually, is burdensome and tedious, and leaves little manoeuvring space if something wrong is noticed at the end of the calculation. It often requires redoing the computations all over again. Accordingly, this module attempts to present some statistical parameters and their usage in flood frequency analysis, and thereafter introduces the HEC-SSP software, which offers many functions to perform frequency analysis speedily and accurately.

 
   
   STATISTICAL TERMS/PARAMETERS OFTEN USED IN FREQUENCY ANALYSIS
   
Statistics
       
  Statistics is concerned with the collection, ordering and analysis of data. Data consist of groups of recorded observations or values. Statistics also provides criteria to assess the credibility of the correlation between variables, and means for deriving the relationship for predicting the value of one variable from known values of other related variables. Any quantity that can take a number of values is a variable. A value that a variable takes is called a 'Variate'. A variable can be either:
  1. Discrete - a variable whose possible values can be counted, e.g. number of rain days in a month or year. It can take only integer values between zero and infinity, or
  2. Continuous - a variable that can take any value within a specified interval. Annual maximum discharge, for example, is a continuous variable, as it can be any value between zero and infinity.
 
   
    Sample and Population
       
  Any finite set of recorded or observed data does not constitute the entire population. It is simply a fraction of the entire population and is referred to as a 'sample'. By deducing the properties exhibited by a sample, inferences are drawn about the nature of the entire population. In other words, collected samples help us predict the possible magnitude and occurrence of future events. It is obvious here that the quality and length of the sample used in the analysis hugely impact the quality of forecast about ensuing events.  
       
   
    Measure of central tendency
       
 

The arithmetic mean of a set of 'n' observations is their average:

mean = (x1 + x2 + ... + xn) / n

When calculated from a frequency distribution, this becomes:

mean = (f1*x1 + f2*x2 + ... + fk*xk) / (f1 + f2 + ... + fk)

In MS Excel, for a given set of data, the mean can be determined by entering the function 'average(a1:a20)' in the formula bar. Here, a1:a20 indicates the range of cells from a1 to a20 in the sample data, supposing the sample length is 20.

Mean is not a firm or fixed value; it fluctuates within a range with variation in the length of samples. The range of this fluctuation is better expressed through a statistical parameter, i.e. the Standard Error of Mean. Other measures of central tendency are median and mode.

 
   
    Dispersion Characteristics
   
    Range
       
  The mean, mode and median give important information about the central tendency of data, but they do not tell anything about the spread or dispersion of samples about the centre.

For example, let us consider the two sets of data:

26, 27, 28, 29, 30, and 5, 19, 20, 36, 60

The simplest measure of dispersion is the range - the difference between the highest and the lowest values. For these two sets of data, both samples have a mean of 28, but the range for the first set is 4, while for the second it is 55. Evidently, one is clearly more tightly arranged about the mean than the other.

 
   
    Standard Deviation
       
 

The standard deviation, SD, is the most widely applied measure of spread around the mean. It indicates the slope of the distribution curve on either side of the mean. Depending on the nature of dispersion of data, the slope could be either gentle or steep. A higher SD indicates a gentle slope, broad scatter around the mean and a higher range; the converse is true when SD is less. Based on this description, it can be presumed that the first set of data will have a smaller SD than that of the second set. A normally distributed curve slopes alike on either side of the mean, as shown here. This apart, for normally distributed data, mean, median and mode all coincide.

The variance of a set of data is the average of the square of the difference of the value of a datum from the mean:

variance = Σ (xi - mean)^2 / n

This has the disadvantage of being measured in the square of the units of the data. The standard deviation is the square root of the variance:

SD = sqrt( Σ (xi - mean)^2 / n )

This formula with denominator 'n' indicates the SD of the entire population. However, for all practical purposes, we deal with 'samples' only, and in such cases, the denominator 'n' is replaced by (n-1) to deal with the limited length of data. The Excel formula to estimate this parameter is =stdev(Range of data). Here, for the two sets of data, the SD computed is 1.58 & 21 respectively, which is consistent with our presumption made earlier.
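
As a quick cross-check outside Excel, the same parameters can be computed in Python. This is a minimal sketch, not part of the original module; the two data sets are the ones used above, and statistics.stdev() uses the (n-1) denominator, exactly like Excel's =stdev().

  import statistics

  set1 = [26, 27, 28, 29, 30]
  set2 = [5, 19, 20, 36, 60]

  for data in (set1, set2):
      m = statistics.mean(data)    # same as Excel =average()
      s = statistics.stdev(data)   # sample SD, (n-1) denominator, as Excel =stdev()
      print(m, round(s, 2))        # prints 28 1.58 and 28 20.99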

 
   
      Skewness
       
 

In several cases, the frequency of occurrence of variables is not normally distributed, and the plot is either skewed +ve (right) (as shown in the fig.) or skewed -ve (left). In other words, the slopes of the curve on either side are dissimilar. Unlike normally distributed data, the mean, median and mode of skewed data do not coincide. The peak point of a skewed plot is the location of the mode. For a normally distributed curve, skewness is zero.

This parameter is determined by the function skew(range of data) in MS Excel. It is evident, from the tables, that for the evenly distributed data set, skewness is nil. The second set of data is positively skewed.
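
Excel's skew() computes the adjusted Fisher-Pearson coefficient of skewness. The short Python equivalent below is a sketch for illustration, reusing the two data sets from above:

  import statistics

  def skew(data):
      # adjusted Fisher-Pearson coefficient, as computed by Excel's =skew()
      n = len(data)
      m = statistics.mean(data)
      s = statistics.stdev(data)   # sample SD, (n-1) denominator
      return n / ((n - 1) * (n - 2)) * sum(((x - m) / s) ** 3 for x in data)

  print(skew([26, 27, 28, 29, 30]))   # 0.0 for the symmetric set
  print(skew([5, 19, 20, 36, 60]))    # positive, i.e. skewed right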

HEC-SSP software itself computes these parameters and performs a number of tasks using them.

 
   
     WHICH VALUE/DATA QUALIFIES AS THE ANNUAL PEAK OF A YEAR?
       
 

Collection of a set of a particular type of data is purpose driven. For frequency analysis of flood peaks corresponding to a return period of 50-yr or so, we look for a set of instantaneous peak discharges of different years. Here, the instantaneous peak discharge of a year means the discharge that is the highest of all discharge values that flowed past a measuring section during that period. The question is how to gather this set of data. The following paragraphs examine this aspect.

 
       
 

Hourly discharge observation is not only expensive but also impracticable. Alternatively, a widely prevalent practice in India is to record a discharge observation once a day (usually at 0800 hr or so), and water level every hour. It is important to note that this recorded discharge observation may or may not be the peak discharge of the day; and therefore, it cannot be a true representative of the instantaneous peak discharge of a day. Let us understand it differently. In the plot shown here, the water level hydrograph and the levels at which discharge observations were carried out have been shown together. It is easily noticeable here that the peak water level (hence discharge) occurred between two observations. This means that if we pick the highest discharge out of the observed discharges recorded in a year, missing out the true instantaneous peak can't be ruled out. Therefore, it is better to look for all such peaks in a year, and pick the corresponding discharge value that is the highest of all. Following are a few approaches suggested for consideration before finalizing a series of annual peaks.

1. Fit a rating curve(s) between observed discharge and corresponding water level. The rating curve so developed and the hourly water level hydrograph together can be used to obtain a no-break/continuous discharge series of a particular year. A plot of water levels and the continuous discharge series, developed using HYMOS software, is displayed here. The peak of this series represents the instantaneous annual peak of that year. (A small code sketch of this approach is placed after item 3 below.)

 
       
 

2. In the absence of a rating curve, a correlation between past observed mean daily discharge (maximum of the year) and instantaneous peak discharge can be developed. This relation can be used to generate the peak discharge corresponding to the maximum recorded discharge for subsequent years.
(for detailed discussion, please refer to Hydrologic Frequency Analysis, Vol-3 published by US Army Corps of Engineers- 1975, http://www.hec.usace.army.mil/publications/IHDVolumes/IHD-3.pdf )

3. In some quarters, peak daily or peak mean daily discharge figures are raised by a certain ratio, say 20 or 30%. This method is a little equivocal and subjective, as raised peak daily values may or may not match the instantaneous peak on application of a certain percentage.
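
The following Python sketch mechanizes approach 1. The power-law rating Q = a*(h - h0)^b and all the numbers in it are hypothetical placeholders, not values from this module; in practice the coefficients come from fitting the curve to gauged discharges.

  # hypothetical fitted rating-curve coefficients for Q = a * (h - h0)**b
  a, b, h0 = 45.0, 1.8, 96.2

  # hourly water levels (m) of one year - placeholder values
  hourly_stage = [97.4, 98.1, 99.6, 101.2, 100.3, 98.7]

  # rating curve + hourly stages -> continuous discharge series
  hourly_q = [a * (h - h0) ** b for h in hourly_stage]

  # the instantaneous annual peak is the highest value of that series
  print(max(hourly_q))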

 
   
    HOW TO ENSURE FITNESS OF DATA FOR FREQUENCY ANALYSIS?
       
 

Annual peaks gathered for frequency analysis must be a product of random influences only. The presence of one or more data points impacted by manual and/or systemic errors gravely distorts the distribution of the plot and its reliability, if it goes unnoticed in the analysis. So, it is essential that a suspect data point be detected and treated, for its modification, retention or removal, before analysis. This apart, data should possess attributes such as homogeneity, randomness, and stationarity. These attributes are explained in the succeeding paragraphs.

 
 

a.

Homogeneity

Homogeneity implies that the sample is representative of the same population. The homogeneity requirement means that each flood occurs under more or less similar circumstances. Two flood events are homogeneous if both are caused by the same factor, such as rainfall. Flood peaks triggered by dam break or breach of embankment are isolated events, and should not be part of peaks created by rainfall. It is assumed that though peak flows of only a finite number of years have been observed, the same 'Statistical Character' (mean, standard deviation, and skewness) has always existed and would behave alike in future too. For this reason, a set of data belonging to a population must closely exhibit the statistical behaviour of another set of data from the same population. To test homogeneity of data, the Student 't' test is normally performed.

 
 

b.

Independence/Randomness

This is explained in the previous module on this topic. Independence or randomness is usually investigated by the Turning Point test.

 
 

c.

Stationarity

In this, the properties or characteristics of the sample do not fluctuate with time. A linear trend test determines this property of the sample.

If any of the above is not an attribute of a sample, the use of a probability/theoretical frequency distribution may lead to erroneous results. Accordingly, it is preferable that before any analysis, one verifies that the sample conforms to these attributes.

HEC-SSP offers no tools to perform these tests. Nevertheless, interested users can use HYMOS software to test whether a compiled set of data qualifies for flood frequency analysis.
For more particulars, we recommend referring to Hydrology Project-I Training Module no.43. This material is available as part of this week's module.
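
Of the three checks, the turning point test for randomness is simple enough to sketch in a few lines. For a random series of length n, the number of turning points p has expectation 2(n-2)/3 and variance (16n-29)/90. The code below is a generic textbook sketch with hypothetical peaks, not the HYMOS implementation:

  import math

  def turning_point_test(series):
      n = len(series)
      # a turning point is a value greater or smaller than both of its neighbours
      p = sum(1 for i in range(1, n - 1)
              if (series[i] - series[i - 1]) * (series[i + 1] - series[i]) < 0)
      z = (p - 2 * (n - 2) / 3) / math.sqrt((16 * n - 29) / 90)
      return z, abs(z) <= 1.96    # True -> randomness not rejected at the 5% level

  peaks = [4120, 3890, 5310, 2970, 6050, 4480, 3720, 5590]   # hypothetical annual peaks
  print(turning_point_test(peaks))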

 
   
     EMPIRICAL VS. THEORETICAL DISTRIBUTION CURVE
       
 

Absolute frequency - Suppose there is a variable that can take values from 0 to 100. A sample of this variable holds 50 different values. Let us group these data in five equal intervals, e.g., 0-20, 20-40, ..., 80-100. The distribution across the five groups is the 'absolute frequency'. Absolute frequency, say n, divided by N, is the relative frequency or probability. Please notice that the sum total of relative frequency is '1'. This concept is used a little later.

 
       
       
 

A relative frequency curve plotted on the basis of the distribution of data in a sample presents a distribution curve known as the empirical distribution curve. This distribution and its statistical parameters help an engineer fit a theoretical probability distribution curve, as closely to the empirical distribution as possible, to ensure mathematical tractability further on.
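
The grouping just described is easy to reproduce in code. A minimal sketch (the sample values are made up) that bins a 0-100 variable into five equal intervals and checks that the relative frequencies sum to 1:

  def relative_frequencies(data, bins=5, lo=0.0, hi=100.0):
      width = (hi - lo) / bins
      counts = [0] * bins
      for x in data:
          i = min(int((x - lo) / width), bins - 1)   # the top edge falls in the last bin
          counts[i] += 1
      return [c / len(data) for c in counts]         # relative frequency of each bin

  sample = [12, 35, 47, 55, 61, 8, 72, 88, 93, 41]   # hypothetical variates
  rel = relative_frequencies(sample)
  print(rel, sum(rel))                               # the sum is 1.0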

 
       

Fig 1

       
 

As understood a while ago, the probability or relative frequency is defined as the number of occurrences of a variate divided by the total number of occurrences, and is usually designated by P(x). The total probability of all variates should be equal to unity, that is, Σ P(x) = 1. The distribution of probabilities of all variates is called the Probability Distribution, and is usually denoted as f(x), as shown in Fig.1.

The cumulative probability curve, F(x), can be of a type as shown in Fig.2.

 

Fig 2

       
 

The cumulative probability, or 'probability of non-exceedance', designated as P(x < X), represents the probability that a random variable takes a value less than a certain assigned value X. The complement of P(x < X), i.e. P(x > X), is termed the Exceedance Probability. One hundred times the Exceedance Probability is called the Exceedance Frequency. Now, glance at Table 1, and read what the probability of 60 not being exceeded is.

 
Table 1
       
 

In the context of flood frequency analysis, we apply the above concepts by taking the instantaneous annual flood peak as the variable 'x'. Then, once the function f(x) or F(x) becomes known by fitting a theoretical distribution, it is possible to find out the probability (or return period) of a flood peak, or conversely, a flood magnitude of desired return period (also called return interval or recurrence interval).

There are a number of probability distribution functions f(x) which have been suggested by statisticians. HEC-SSP supports the following distribution functions.
(Readers can download and install the HEC-SSP software from the site,
https://www.hec.usace.army.mil/software/hec-ssp/download.aspx )

 
       
 

Without log transformation

  I. Normal &
  II. Pearson type III

With log transformation

  I. Log normal &
  II. Log Pearson type III

Another often used distribution is the Gumbel method. Even though the HEC-SSP software does not incorporate this method, the user can readily use the mean and standard deviation to estimate the flood peak corresponding to a return period, T = (1/P), with the help of the formulas placed below:

XT = M + B * (-ln (-ln (1-P)))

Where,

  M = Xmean - 0.45005 * Standard Deviation
  B = 0.7797 * Standard Deviation

However, this method is recommended when the length of data is really large, say more than 100 (ref: Patra K C, Hydrology and Water Resources Engineering). Instead, when data is scarce, i.e., data length is below 100, the user may use the Gumbel table, which features in almost every hydrology book, to read K, the frequency factor for a given sample size and return period. In that case, XT is estimated as

XT = Xmean + K * St Deviation
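
The large-sample form of the Gumbel estimate translates directly into code. In this sketch the mean and standard deviation are hypothetical placeholders; P = 1/T is the exceedance probability of return period T.

  import math

  def gumbel_flood(mean, sd, T):
      # XT = M + B * (-ln(-ln(1 - P))), with P = 1/T (large-sample Gumbel)
      P = 1.0 / T
      M = mean - 0.45005 * sd
      B = 0.7797 * sd
      return M + B * (-math.log(-math.log(1.0 - P)))

  # hypothetical sample statistics (cumec): 50-yr flood estimate
  print(round(gumbel_flood(mean=8500.0, sd=2600.0, T=50)))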

 

 
   
   PLOTTING POSITION
       
 

To assign a probability to a sample data point (also called a variate) and to determine its 'plotting position' on probability paper, the sample data consisting of N values is arranged in descending order. Each entry (say the event X) of the ordered list is then assigned a rank 'm', starting with 1 for the highest value and ending with N for the lowest value in the list. The exceedance probability of a certain value x is estimated by the formula presented below:

p = (m-a)/(N-a-b+1)

Where, m is the rank of the sample data point in the array; N represents the size of the sample; and 'a' and 'b' are constants. For different methods, a and b assume different values. With the Weibull method, a & b equal zero; and hence, p reduces to m/(N+1). HEC-SSP, by default, uses the Weibull method to show the dispersion of data. Nevertheless, the option is available for other methods by defining appropriate values of a & b. Of these, the Weibull formula is most frequently used, because it is simple and intuitively easy to understand. (For a detailed discussion on the choice of a particular method, the reader may refer to Applied Hydrology by Ven T Chow.)
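
A short sketch of the general plotting-position formula, with Weibull (a = b = 0) as the default; the peaks are hypothetical:

  def plotting_positions(flows, a=0.0, b=0.0):
      # rank m = 1 for the highest value, N for the lowest
      n = len(flows)
      ranked = sorted(flows, reverse=True)
      return [(q, (m - a) / (n - a - b + 1))   # (value, exceedance probability)
              for m, q in enumerate(ranked, start=1)]

  peaks = [4120, 3890, 5310, 2970, 6050]       # hypothetical annual peaks
  for q, p in plotting_positions(peaks):
      print(q, round(p, 3))                    # Weibull: p = m/(N+1)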

 
   
     WHICH DISTRIBUTION FITS WELL?
       
 

HEC-SSP offers a graphical plot displaying the scatter of sample data in addition to the computed curve. Here, the user has the option to choose the method of plotting position and the theoretical curve of his choice. A graphical plot is a visual aid for judging goodness of fit broadly; and therefore, a conclusion based merely on eye discernment is hugely subjective. To overcome this limitation, the user can analyze the results produced by the software and employ any one of the following tests to measure the strength of fitness. However, such analysis needs to be done outside, as HEC-SSP contains no built-in function of this kind. This module shows steps to conduct the D-index test only. For details with regard to the others, users may refer to Hydrology Project-I Training Module no.43.

 
       
       
 
  • Chi-square test
  • Kolmogorov-Smirnov test
  • Binomial goodness of fit test, and
  • D-index test

Once a particular distribution is found to be the best, it is adopted for computation of peak floods in future.

The D-index is calculated by

D-index = Σ(i=1 to 6) abs(Xi,observed - Xi,computed) / (mean of sample)

where,

  Xi,observed = observed value for a given p, exceedance probability
  Xi,computed = for the same p, value determined by the fitted distribution curve

D-index test is shown later in this module.
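
In code, the D-index is a one-liner over the six observed/computed pairs. The values below are hypothetical placeholders; in the case study they would come from the highlighted rows of Table 3 (see Table 5):

  def d_index(observed, computed, sample_mean):
      # sum of absolute differences between observed and fitted values, scaled by the mean
      return sum(abs(o - c) for o, c in zip(observed, computed)) / sample_mean

  # hypothetical values for the six smallest exceedance probabilities
  obs = [71500, 64200, 58900, 52300, 47800, 44100]
  fit = [70100, 65000, 58200, 53100, 47100, 44800]
  print(round(d_index(obs, fit, sample_mean=31400.0), 4))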

 
   
    CASE STUDY
       
 

From this point forward, a real sample (Table 2) has been taken up for frequency analysis with the HEC-SSP software. The walkthrough of the steps of plotting and fitting a theoretical distribution curve, and the analysis of outputs, will help the reader grasp the functions of the software speedily. The software outputs a series of additional information, which has been discussed at appropriate locations.

 
       
       
 

Step 1

As noted earlier, this set of data is required to be studied to confirm its conformity to the desired attributes of sample data, i.e. homogeneity, randomness and stationarity. Following is a screenshot of the HYMOS software, which is used to conduct a series homogeneity test on a given series. A pop-up window in the middle of this screenshot indicates the result for this series as 'accepted'. In all three tests, the hypothesis that the series is random is not rejected. This implies that the current sample is a collection of random data.

 
 
 
 
 
 
 
  Step 2

The subsequent step begins with the creation and saving of an EXCEL sheet with two columns - the first for year and the second for discharge. This file is imported (Fig.4) into the HEC-SSP software to carry out frequency analysis. The interested reader is suggested to go through the 'User's Manual' of this software (p 4-7 to p 4-9, to learn how to import data from MS Excel), which is available under the 'Help' menu of the software.

This manual is also available at http://www.hec.usace.army.mil/software/hec-ssp/documentation/HEC-SSP_20_Users_Manual.pdf .

Optionally, the user can directly input data by selecting the 'Manual' button on the 'Data Importer' window (Fig.4). To open the 'Data Importer' window, click on the 'Data' menu followed by choosing 'New'.

 
 

Fig 4

 
Step 3

Once data is available, Chapter 6 of the 'User's Manual' helps the user conduct frequency analysis. The 'General Frequency Analysis Editor' window as shown in Fig.5 can be invoked by selecting Analysis - New - General Frequency Analysis from the menu. A data report (Table 3) along with the distribution curve (Fig.6) produced by the software for this set of data, using the Log Pearson type III distribution, is placed next. Before we delve into results, let us familiarize ourselves with a couple of lines appearing on the plot. Later, we will discuss their significance, and how they are estimated.

 
 

Tiny circular points in blue are annual peaks occupying their positions on the plot (also called probability paper) according to the probabilities assigned to them by the 'Weibull method'. As discussed earlier in the module, this scatter is the 'Empirical Frequency Distribution'. The line in red denotes the Log Pearson Type-III 'Theoretical Distribution Curve'. Can you read on the plot what the return period of the circular point farthest to the right is? It is roughly 30 yrs. If we wish to ascertain the peak discharge of a still higher return period relying on the empirical distribution alone, no means are available. For a majority of hydrological and hydraulic studies, a flood magnitude of return period 50 years or more is needed. Such estimates are extracted with the help of the theoretical distribution curve, which is mathematically extended further.

 
 

Fig 5

 
 
  • A dotted line in blue is the expected probability curve. This aspect is discussed later.
  • A pair of lines in green on either side of the plot is the 90% confidence band. This aspect is also covered later.
 
 
 
 
Table 3
 
 
 
 
 
  Of the several useful outputs generated by the software, two need special attention.
These are:
  I. Confidence Limits, and
  II. Expected Probability
 
   
     CONFIDENCE BANDS AND CONFIDENCE LIMITS
       
  The record of annual peak flow at a site is a random sample collected over a period of time. The varied nature of causative factors and the complex interactions among them bring about randomness in the sample. Accordingly, in all likelihood, a different set of samples from the same population results in a different estimate of the frequency curve. Thus, a computed flood frequency curve can be only an approximation to the true frequency curve of the population of annual flood peaks. To gauge the accuracy of this approximation, one may construct an interval or a band of hypothetical frequency curves that, with a high degree of confidence, contains the true frequency curve. Such intervals are called confidence intervals, and their end points are called confidence limits. This is comparable to the standard error of mean concept.  
     
 

The two limits of 0.05 and 0.95, or the 5% and 95% chance exceedance curves (pl. see the output in Table 3), indicate that there is a 90% chance/probability that the discharge value will lie between these bounds, and only 10% of observations may fall outside this band. To put it differently, the upper limit suggests a flow with 5% exceedance probability, or (100-5), i.e. 95% non-exceedance probability. If certainty of that degree is warranted for a project, a flow of this magnitude can be chosen for design, although at the cost of escalation in project cost. In fact, this choice is a trade-off between the cost of the project and the safety of the structure. A similar conclusion can be drawn about the lower limit.

The confidence band width is determined by the equation given below:

QU,L = Qmean ± KU,L * St Deviation

Where,

KU,L is a function of exceedance probability, sample size, skewness coefficient and the confidence level opted by the user. The value of KU,L decreases with a rise in sample size. This brings the two lines representing QU & QL closer to each other, and therefore a narrower band will appear. HEC-SSP assumes exceedance probabilities of 0.05 and 0.95 by default and returns the output. The user, at his discretion, can select any other value instead. For more details about KU,L, the reader may refer to 'Reference 2'.
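
For a normal (or, after log transformation, log-normal) fit, Reference 2 (Bulletin 17B) approximates KU,L with a noncentral-t formula. The sketch below follows that approximation under an assumed zero skew; the sample statistics fed in are hypothetical placeholders, and statistics.NormalDist supplies the standard normal quantiles.

  import math
  from statistics import NormalDist

  def confidence_k(p_exceed, n, confidence=0.95):
      # Bulletin 17B-style noncentral-t approximation (zero-skew assumption)
      z_c = NormalDist().inv_cdf(confidence)       # e.g. 1.645 for the 95% level
      k_p = NormalDist().inv_cdf(1.0 - p_exceed)   # standard normal K for probability p
      a = 1.0 - z_c ** 2 / (2.0 * (n - 1))
      b = k_p ** 2 - z_c ** 2 / n
      root = math.sqrt(k_p ** 2 - a * b)
      return (k_p + root) / a, (k_p - root) / a    # (K upper, K lower)

  k_u, k_l = confidence_k(p_exceed=0.01, n=40)
  mean, sd = 3.95, 0.22                            # hypothetical log10-space statistics
  print(10 ** (mean + k_u * sd), 10 ** (mean + k_l * sd))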

 
       
   
    EXPECTED PROBABILITY
       
  The expected probability adjustment is necessitated to account for a bias introduced in the distribution curve on account of shortness of data. Factually, these distributions assume a spread of data from -∞ to +∞, while in reality this is far from true. This calls for measures to address the short length of data. Table 4 is an extract from Applied Hydrology by Ven T Chow listing correction factors for different return periods.  
 
 
  Where, N is the number of sample data used in the analysis. Please note that as N approaches infinity, the expected probability equals the exceedance probability. Here too, HEC-SSP offers both alternatives, to compute or not to compute the expected probability and corresponding flood values for various exceedance probabilities (Fig.7).  
 

 
   
     HOW TO PERFORM D-INDEX TEST
       
 

The HEC-SSP software, by default, outputs flood peaks for a few exceedance frequencies like 0.2, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 80.0, 90.0, 95.0, and 99.0. However, the appropriate part of the window, shown in Fig.8, can be suitably adjusted by the user to gather flood peaks of desired exceedance frequency, usually matching those tabulated by the software using the Weibull method (pl. refer to the tabulated result under Table 3).

 
       
       
  An exercise to compute the D-index value for this set of data, outside of the HEC-SSP environment, is placed at Table 5. Please note that the data highlighted in red in Table 3 populate this table for calculation of the D-index. As can be seen, the lower the value of the D-index, the better the fit is.
 
       
   
    OUTLIERS
       
 

Outliers are points in a data set which plot significantly away from the rest of the sample data (the main body of the plot), and their deletion, retention or modification warrants prudent scrutiny of all the factors giving rise to them. In the paragraphs to follow, this aspect has been discussed at length.

The following equation is used to detect outliers:

QHigh, QLow = Qmean ± KN * St Deviation

Where,

  KN is a frequency factor and varies according to sample size.

HEC-SSP automatically performs the detection process, and reports and analyzes the set of data accordingly.
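
A sketch of this screen: Bulletin 17B applies the test to the logarithms of the annual peaks and tabulates KN against sample size. The closed-form expression below is a commonly quoted approximation of that 10%-significance table (roughly valid for 10 <= N <= 149), not the exact tabulated values; the peaks are hypothetical.

  import math
  import statistics

  def outlier_thresholds(peaks):
      logs = [math.log10(q) for q in peaks]   # work on log10 of peaks, as in Bulletin 17B
      m, s = statistics.mean(logs), statistics.stdev(logs)
      n = len(logs)
      # approximation of the Bulletin 17B KN table (10% significance level)
      k_n = -0.9043 + 3.345 * math.sqrt(math.log10(n)) - 0.4046 * math.log10(n)
      return 10 ** (m + k_n * s), 10 ** (m - k_n * s)   # (high, low) thresholds

  peaks = [4120, 3890, 5310, 2970, 6050, 4480, 3720, 5590, 2650, 7040]   # hypothetical
  print(outlier_thresholds(peaks))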

 
   
    HANDLING DIFFERENT SCENARIOS
       
  The study covered in this module plots all annual peaks more or less closely conforming to the theoretical distribution line (see Fig.6). It also means the absence of even a single peak straying from the rest of the peaks. So, the number of outliers for this case is zero. Nevertheless, samples not as orderly as the one cited here are always a possibility, and it is likely that they may contain outliers - both high and low, or either of the two; or zero flows; or even historical floods outside the systematic (also called continuous) record of annual peaks.

In dealing with such records, one, however, must be convinced about the authenticity of the data, and should guard against entry of inflated or dubious values into the analysis.

In HEC-SSP, the presence of zero flows and low outliers is automatically detected and screened out by the software, and a conditional probability adjustment, to account for truncated values, is employed to estimate revised plotting positions. The software also modifies the values of the statistical parameters that define the theoretical distribution curve.

 
 
 
  In a deviation from the above, high outliers, so long as they are not estimated values, are not eliminated from the record, as they are an invaluable piece of the flow record and might be representative of a longer period of record. For example, a flood value in a set of data, detected by the software as an outlier, could be the largest flood that has ever occurred in an extended period of time. Like other cases, HEC-SSP detects high outliers as well, and presents the analysis accounting for the historical length of time period entered by the user and the number of high outliers detected by the software itself. The computed curve returned by the software utilizes modified statistical parameters, i.e. mean, standard deviation, and skewness coefficient. Fig.9 is one of the windows of the software that lets the user make suitable entries to define the Historical Period, if a high outlier exists beyond the systematic record. To gather more information about the mathematical steps involved in dealing with varying cases such as those cited here, interested readers should refer to the material listed against Sl. No. 2, at the end of this module.

Here, we place sample data sets (Table 6 & 7) for Flood Frequency Analysis under different conditions. The user may key in these sets of data in HEC-SSP to perform frequency analysis for the different cases.

 
 
 
 

 
       
  As outlined in one of the preceding paragraphs, HEC-SSP has the ability to detect low outliers and/or zero flows, and projects the probability curve by introducing the conditional probability adjustment. Contrary to this, analysis of high outliers and historical data does need a few entries by the user. Fig.10 deals with high outliers, where a peak discharge of 71,500 cumec is detected as a high outlier by the software, and an entry of 1892 by the user in the cell for start year implies this peak is the highest known value since the year 1892. Fig.11 deals with historical data, where the user has entered a historical flood value along with the corresponding year. An entry of 1974 against end year signifies that no significant flood occurred after the regular discharge record ceased in the year 1955.  
 
 
 

 
       
       
       
   
     REFERENCES
       
 
  1. HEC-SSP User's Manual, available at http://www.hec.usace.army.mil/software/hec-ssp/documentation/HEC-SSP_20_Users_Manual.pdf
  2. Guidelines for Determining Flood Flow Frequency - Bulletin 17B of the Hydrology Sub-Committee - A publication by US Department of the Interior, Geological Survey, Office of Water Data Coordination, http://water.usgs.gov/osw/bulletin17b/bulletin_17B.html
  3. Ven T Chow, David R Maidment, Larry W Mays, (International Edition 1988), Applied Hydrology, McGraw-Hill Book Company
  4. Patra, K C, (2001), Hydrology & Water Resources Engineering, Narosa Publishing House
  5. Hydrologic Frequency Analysis, Vol-3 published by US Army Corps of Engineers- 1975,
    http://www.hec.usace.army.mil/publications/IHDVolumes/IHD-3.pdf
  6. Mutreja, K N, Applied Hydrology, Tata McGraw Hill Publishing Company Limited, New Delhi
  7. Hydrology Project- Phase I (India), Training Module no.43
 
   
     CONTRIBUTOR
  Anup Kumar Srivastava  
  Director    
  National Water Academy, Pune, India    
       
   
     ACKNOWLEDGEMENT
  The author of this module hereby acknowledges the invaluable support received from Shri D S Chaskar and Dr R N Sankhua of the National Water Academy, CWC, Pune, in the preparation and presentation of this module in its current shape.