...

Clearly this means sampling every 0.208333 milliseconds (at 4800 Hz) and every 0.069444 milliseconds (at 14400 Hz) respectively.
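As a quick sanity check (a minimal sketch, assuming the 4800 Hz and 14400 Hz rates compared later in this article), those intervals are just the reciprocals of the sampling rates:

    # Sample interval in milliseconds = 1000 / sampling rate in Hz
    for rate_hz in (4800, 14400):
        interval_ms = 1000.0 / rate_hz
        print(f"{rate_hz:6d} Hz -> {interval_ms:.6f} ms between samples")
    # 4800 Hz -> 0.208333 ms, 14400 Hz -> 0.069444 ms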

This means that the interim UCA International Users Group guideline known as IEC 61850-9-2LE is defunct; it is no longer needed.
In any case it was never a Standard as such, just an interim "gentleman's agreement" of sorts to give all vendors some common basis for interoperable devices. There was no mandatory compliance requirement, and parts of its specification were out of date, not really practical, or simply not good enough from a performance aspect, such as "1PPS over glass fibre" time synchronisation.

IEC 61850-9-2 Sampled Values applies to all of the 20-plus T-group sensors (the T group covers analogue samples), of which only two relate to CT and VT sensors: Logical Nodes TCTR and TVTR.
IEC 61850-9-2LE, and now IEC 61869-9, "simply" provide the parameters for configuring the Merging Unit's generic Multicast Sampled Value Control Blocks ("MSVCBxx") to suit CT and VT applications. You could, for instance, use TCTR to provide sampled values of the battery DC current once every hour, but those parameters would not suit CT and VT applications.
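As an illustrative sketch only (the attribute names follow IEC 61850-7-2, but the profile values shown are the commonly quoted ones, not taken from this article or from any vendor's implementation), the distinction between the generic control block and a specific profile might be modelled like this:

    from dataclasses import dataclass

    @dataclass
    class MSVCB:
        """Illustrative model of a Multicast Sampled Value Control Block."""
        svID: str     # sampled value stream identifier
        smpRate: int  # samples per nominal period or per second (see smpMod)
        smpMod: int   # 0 = samples per nominal period, 1 = samples per second
        noASDU: int   # number of sample sets packed into one Ethernet frame
        confRev: int  # configuration revision

    # 9-2LE protection profile: 80 samples per nominal cycle, 1 ASDU per frame
    le_protection = MSVCB(svID="MU01", smpRate=80, smpMod=0, noASDU=1, confRev=1)

    # IEC 61869-9 preferred protection rate: 4800 samples per second
    iec61869_9 = MSVCB(svID="MU01", smpRate=4800, smpMod=1, noASDU=2, confRev=1)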

However, have you ever wondered what the difference in sampling rates really means for accuracy?

When an IED is sampling, it only knows the value of the waveform at the last sample.  There is a step-change in "known value" every time a new sample is taken. Obvious really.
That means the IED must take samples frequently enough to satisfy the performance and accuracy requirements of the function itself.
Of course, faster sampling is no problem for the function, as it can "pick and choose" which samples it uses; it can even "re-sample" a derived waveform at a different sampling rate.
However, unnecessarily fast sampling increases the bandwidth requirements, which puts additional performance demands on the IED port itself as well as on the LAN's capacity to distribute all the messages within the required latency.
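To put rough numbers on that bandwidth point (a back-of-envelope sketch; the 130-byte frame size is an assumption, since real SV frame lengths vary with ASDU packing and options):

    # Approximate SV stream bandwidth: frames per second x bits per frame.
    # Assumes one ASDU per frame; packing several ASDUs into a frame
    # reduces the frame rate but increases the frame size.
    FRAME_BYTES = 130  # ASSUMED typical single-ASDU 9-2 frame size

    for rate_hz in (4800, 14400):
        mbit_per_s = rate_hz * FRAME_BYTES * 8 / 1e6
        print(f"{rate_hz:6d} frames/s -> ~{mbit_per_s:.1f} Mbit/s per stream")
    # ~5.0 Mbit/s at 4800 Hz versus ~15.0 Mbit/s at 14400 Hz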

Those issues are not new: we have long used protection-class CTs and VTs to derive measurements of amps, volts, power and frequency, even when the current was less than 10% of rated (where the CTs are working near their ankle point), or when, even at their accuracy limit factor, they are only 5% or 10% accurate.  But the measurements associated with revenue metering for billing purposes needed to be far more accurate, so much higher accuracy CTs were needed, as well as higher accuracy metering devices. In the old days of electromechanical "disc" meters there was no sampling at all, and the accuracy was purely a matter of the physical construction of the meter.

So how much more "accurate" is a sampling rate of 14400 Hz, three times 4800 Hz?
This is much like the question of the impact of different GOOSE repetition rates and whether the network is flooded when an event occurs.
In most instances the answer is intuitively obvious, but sometimes it is nice to know the "mathematics" behind it - see, for example, GOOSE "Flooding".

Of course there is also the accuracy of the actual sampling process itself - i.e. if the true value is 102.7612 A, does the digital sample report precisely that, or only within some small percentage? An accuracy of ±0.5%, for instance, could give a reported value as high as 103.2750 A or as low as 102.2474 A.
That is clearly dependent on the vendor's design of the CT/VT sensor (wire-wound or optical) and of the Merging Unit's hardware, just like the old CTs and VTs and the electromechanical meters.
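A quick check of those figures (a minimal sketch; the ±0.5% band is the example tolerance from the paragraph above, not a standards requirement):

    # Reported-value band for a ±0.5% amplitude accuracy
    true_value = 102.7612   # amperes
    tolerance = 0.005       # ±0.5%
    low = true_value * (1 - tolerance)
    high = true_value * (1 + tolerance)
    print(f"{low:.4f} A .. {high:.4f} A")  # 102.2474 A .. 103.2750 A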

But when it comes to sampling frequency, the question is how closely the samples represent the actual waveform. We know that sampling at 100 Hz gives the IED just two samples per 50 Hz cycle - the Nyquist limit - so it cannot know what the waveform really looks like; in the worst case both samples could land on the zero crossings and the IED would see nothing at all.

If we compare the actual sine wave to the "known value" held by the IED, we can see the effect of the different rates of step change in the calculated values, as shown below for 10.9 milliseconds of a 50 Hz waveform (a little more than half a cycle):
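As a hedged sketch of how such a comparison can be generated (this is not the article's own plotting code), the IED's stepped "known value" is a zero-order hold of the true sine wave - each sample is held until the next one arrives:

    import numpy as np

    F_NOMINAL = 50.0     # power system frequency, Hz
    DURATION = 0.0109    # 10.9 ms, a little more than half a cycle

    def held_value(t, sample_rate_hz):
        """The value the IED 'knows' at time t: the most recent sample."""
        last_sample_time = np.floor(t * sample_rate_hz) / sample_rate_hz
        return np.sin(2 * np.pi * F_NOMINAL * last_sample_time)

    t = np.linspace(0.0, DURATION, 10000)   # fine time axis
    true_wave = np.sin(2 * np.pi * F_NOMINAL * t)

    for rate in (4800.0, 14400.0):
        worst = np.max(np.abs(true_wave - held_value(t, rate)))
        print(f"{rate:7.0f} Hz: worst-case step error ~ {worst:.1%} of peak")

The worst case occurs near the zero crossing, where the sine wave is steepest; tripling the sampling rate cuts that step error to roughly one third (about 6.5% of peak at 4800 Hz versus about 2.2% at 14400 Hz).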

...