Supporting magnitudes (values + units) might be a good complement to the current vector functionality of System.Numerics. It would allow .NET to deal with any Physics-based situation in an intuitive and easy way. What I am proposing here might also be adapted to other not-related-to-Physics requirements, like angles or computing units (e.g., bit/byte).
The new `Magnitude` class is expected to be defined in such a way that all the operations and unit conversions can be easily performed. The idea is to allow programmers to interact with values+units as intuitively as they currently do with plain values. For example: having 5 m/s and wanting to convert it to in/s or to add it to 3 ft/h.
Rough sketch of what I have in mind:
``` C#
public class Magnitude
{
public static Magnitude operator +(Magnitude first, Magnitude second)
{
//It is necessary to confirm whether "type" & "measurement" are compatible and eventually perform all the required conversions.
return new Magnitude((decimal)first.value + (decimal)second.value, first.measurement.unit.ToString());
}
//Further basic operator overloads.
//Other methods performing basic operations among Magnitude instances.
public readonly MagnitudeType type; //The type of the instance has to remain unaltered.
public object value { get; set; } //The getter will be defined on account of the "measurement" properties.
public Measurement measurement { get; set; }
public Magnitude(decimal value, string unit)
{
this.value = value;
//The "unit" value will be adequately parsed and analysed.
//The version below is a simplistic version to show the idea.
if (unit == "m")
{
this.type = MagnitudeType.Length;
this.measurement = new Measurement(UnitSystem.InternationalSystem, LengthInternational.Meter);
}
else if (unit == "ft")
{
this.type = MagnitudeType.Length;
this.measurement = new Measurement(UnitSystem.ImperialSystem, LengthImperial.Foot);
}
}
//Other overloads accepting different arguments.
}
public class Measurement
{
public UnitSystem system { get; set; }
public object unit { get; set; }
public Measurement(UnitSystem system, object unit)
{
this.system = system;
this.unit = unit;
}
}
//Very simplified versions showing how the enums are expected to look.
public enum MagnitudeType { Length, Weight }
public enum UnitSystem { InternationalSystem, ImperialSystem };
public enum LengthInternational { Millimeter, Centimeter, Meter };
public enum LengthImperial { Inch, Foot };
```
Simple example showing how the aforementioned code is expected to be used:
``` C#
Magnitude var1 = new Magnitude(1.234m, "m");
Magnitude var2 = new Magnitude(5.678m, "ft");
Magnitude var3 = var1 + var2;
```
I can only think of one drawback of the proposed approach: relying on enums in a situation like this (i.e., with many alternatives and classifications) implies hardcoding a relevant number of different scenarios. In any case, I think it would be worthwhile anyway.
CLARIFICATION (from my previous contributions, I understand that this point should be evident, but just in case...): I am expecting to take care of the proposed implementation completely on my own and also to deliver a comprehensive-enough first version. The initial lack of detail is exclusively motivated by the fact that I prefer not to spend too much time on this before confirming that the .NET community/team likes it.
If at all possible, I would want any system for units of measure to be type-safe: adding meters and kilograms certainly shouldn't compile and adding meters and feet without explicit conversion probably shouldn't either.
Type-safe units of measure already exist in F# and there is a proposal to add them to C# (dotnet/roslyn#144). There's also Gu.Units, which at first sight looks nice.
@svick The problem with your proposal is that the associated complexity is much higher.
After a quick look at the F# alternative, it seems different from what I am proposing here: with that approach, you have to redefine the units every time, which goes against the expected intuitiveness and low effort. In any case, I didn't know about that (or about future porting plans) and it might be interesting to also support this option (i.e., user-defined units).
Regarding the library and also after a very quick look, it seems to deliver something equivalent to what I am proposing although the approach is much more complex.
In any case, the proposed parsing-based approach is also very safe. For example: you can create an algorithm accounting for "m", "meters", "meter", "metre", etc., all of them understood as the unit meter. Also, I was planning to account for complex units, like "m/s" not being understood as meters (space) and seconds (time), but as meters per second (velocity); equivalently, the algorithm would also accept "meters/s", "m/sec", "m/seconds", etc. One of the advantages of this alternative is that it makes any automation very easy; for example: reading all the information directly from a file and letting the parser automatically understand the units.
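A minimal sketch of the kind of alias normalisation I have in mind (the helper name and the alias lists below are just illustrative assumptions, not the final design):
``` C#
using System.Collections.Generic;
using System.Linq;

public static class UnitAliases
{
    //Illustrative alias table: many spellings map to one canonical token per unit.
    private static readonly Dictionary<string, string[]> canonical = new Dictionary<string, string[]>
    {
        { "m", new[] { "m", "meter", "meters", "metre", "metres" } },
        { "s", new[] { "s", "sec", "secs", "second", "seconds" } },
        { "ft", new[] { "ft", "foot", "feet" } }
    };

    //"meters/sec" -> "m/s"; unknown tokens are returned unchanged so the caller can reject them.
    public static string Normalize(string unit)
    {
        var parts = unit.Split('/').Select(part =>
        {
            string token = part.Trim().ToLowerInvariant();
            var match = canonical.FirstOrDefault(pair => pair.Value.Contains(token));
            return match.Key ?? token;
        });
        return string.Join("/", parts);
    }
}
```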
@svick Sorry, but I didn't clarify this point in my previous message: the proposed code will never mix up different units or different systems of units. Note that the main definition (length, weight, speed, etc.) is given by `MagnitudeType type`, which is created on instantiation after having parsed the inputs and cannot be modified later (in other overloads its definition might be different, but no-mixing-up will always be a top priority). The remaining definition layers (system of units or units) can be modified at any point, but only within the given range of accepted values; that is: a specific `enum` (e.g., `LengthInternational`, containing only length units in the IS).
You don't need one special type per category to make sure that there is no mixing. It is enough to make sure that the main type/class, `Magnitude` in this case, doesn't allow such a thing to occur.

> You don't need one special type per category to make sure that there is no mixing.

You do, if you want to ensure no mixing _at compile time_. Throwing an exception at runtime (which I assume is what you're suggesting) is not good enough for me.
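To illustrate the difference (the types below are hypothetical, not part of either proposal): with one dedicated type per category, mixing categories is rejected by the compiler rather than at runtime.
``` C#
public struct Meters
{
    public double Value;
    public Meters(double value) { Value = value; }

    public static Meters operator +(Meters a, Meters b)
    {
        return new Meters(a.Value + b.Value);
    }
}

public struct Kilograms
{
    public double Value;
    public Kilograms(double value) { Value = value; }
}

class Demo
{
    static void Main()
    {
        Meters length = new Meters(2) + new Meters(3);       // compiles
        //Meters wrong = new Meters(2) + new Kilograms(3);   // error CS0019: operator '+' cannot be applied
    }
}
```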
Such a powerful, ready-to-use feature, with the standard units already defined and automatic conversion between units, can be very useful in many scenarios, specifically for .NET Core being used in IoT projects.
@svick Not necessarily an exception. The main type is fixed while instantiating the class, like what happens with any other parsing approach.
For example: `decimal.Parse("abc")` triggers an error and `decimal.TryParse("abc")` doesn't trigger an error, but you can be sure that only `decimal` inputs will pass through in both cases. What I am proposing is the same: the aforementioned `new Magnitude(1.234m, "m")` is good and that's why a new instance is created, but `new Magnitude(1.234m, "abc")` is wrong (an exception might be triggered or not) and no instantiation will occur.
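A hedged sketch of how that could look in practice; the `MagnitudeFactory`/`TryCreate` names are mine and only illustrate the "no instance on bad input" behaviour (in the real proposal this logic would live on `Magnitude` itself):
``` C#
public static class MagnitudeFactory
{
    //Mirrors the decimal.TryParse pattern: no exception, no instance for an unrecognised unit.
    public static bool TryCreate(decimal value, string unit, out Magnitude result)
    {
        if (unit == "m" || unit == "ft") //stand-in for the real unit-parsing logic
        {
            result = new Magnitude(value, unit);
            return true;
        }

        result = null;
        return false;
    }
}
```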
@svick's last comment reminded me of another issue which I didn't highlight well at the start: conventions when using operator overloads.
My ideas on this front:

- `Magnitude var1 = new Magnitude(1.234m, "m"); Magnitude var2 = new Magnitude(5.678m, "ft"); Magnitude var3 = var1 + var2;`: the units of `var3` are "m".
- `Magnitude var1 = new Magnitude(1.234m, "m"); Magnitude var2 = new Magnitude(5.678m, "kg"); Magnitude var3 = var1 + var2;`: `var3` equals `var1`.

I didn't realise that this issue was already assigned (no notification for that). Now I am a bit busy, but I will start working on a more definitive version within the next 1-2 weeks.
I have an idea of how to make this flexible, easy to use and compile-time checked. I think using a combination of different patterns could work in this case. However, this only works for linear operations. I know it looks like a lot of code, but it is simple code that provides a powerful API. It is also just a first draft which could be optimized to reduce the code.
``` c#
public class Magnitude<T>
    where T : IMeasurement
{
public double Value { get { return Representation.Value; } }
public string Unit { get { return Representation.Unit; } }
internal T Representation { get; private set; }
public Magnitude(double value, T instance)
: this(instance)
{
Representation.Value = value;
}
internal Magnitude(T convertedInstance)
{
Representation = convertedInstance;
}
public Magnitude<T> ConvertTo(T target)
{
return ConvertResult(target, Representation.ToDefault());
}
public static Magnitude<T> operator +(Magnitude<T> a, Magnitude<T> b)
{
var value = a.Representation.ToDefault() + b.Representation.ToDefault();
return ConvertResult(a.Representation, value);
}
public static Magnitude<T> operator -(Magnitude<T> a, Magnitude<T> b)
{
var value = a.Representation.ToDefault() - b.Representation.ToDefault();
return ConvertResult(a.Representation, value);
}
private static Magnitude<T> ConvertResult(IMeasurement targetType, double value)
{
var converted = (T)targetType.FromDefault(value);
return new Magnitude<T>(converted);
}
}
public interface IMeasurement
{
string Unit { get; }
double Value { get; set; }
double ToDefault();
IMeasurement FromDefault(double value);
}
```
### Sample measurement:
``` c#
public abstract class LengthMeasurement : IMeasurement
{
public abstract string Unit { get; }
public double Value { get; set; }
public abstract double ToDefault();
public abstract IMeasurement FromDefault(double value);
public static LengthMeasurement Meters { get { return new MeterLength(); } }
public static LengthMeasurement Kilometers { get { return new KiloMeterLength(); } }
public static LengthMeasurement Feet { get { return new FeetLength(); } }
public override string ToString()
{
return string.Format("{0}{1}", Value, Unit);
}
}
internal class MeterLength : LengthMeasurement
{
public override string Unit
{
get { return "m"; }
}
public override double ToDefault()
{
return Value;
}
public override IMeasurement FromDefault(double value)
{
return new MeterLength { Value = value };
}
}
internal class KiloMeterLength : LengthMeasurement
{
public override string Unit
{
get { return "km"; }
}
public override double ToDefault()
{
return Value * 1000;
}
public override IMeasurement FromDefault(double value)
{
return new KiloMeterLength { Value = value / 1000 };
}
}
internal class FeetLength : LengthMeasurement
{
public override string Unit
{
get { return "ft"; }
}
public override double ToDefault()
{
return Value / 3.28084;
}
public override IMeasurement FromDefault(double value)
{
return new FeetLength { Value = value * 3.28084 };
}
}
```
``` c#
public void Add()
{
var a = new Magnitude<LengthMeasurement>(10, LengthMeasurement.Meters);   // arguments reconstructed (lost in formatting); values chosen to match the comments below
var b = new Magnitude<LengthMeasurement>(200, LengthMeasurement.Meters);
var c = a + b; // 210m
var d = a - b; // -190m
var e = d.ConvertTo(LengthMeasurement.Feet); // -623 ft
}
```
## Cross unit operations
Now if we wanted to kick this up a notch we could add further derivations of `Magnitude` to carry the special logic:
``` c#
public class LengthMagnitude : Magnitude<LengthMeasurement>
{
public LengthMagnitude(double value, LengthMeasurement instance) : base(value, instance)
{
}
internal LengthMagnitude(LengthMeasurement convertedInstance) : base(convertedInstance)
{
}
public static AreaMagnitude operator *(LengthMagnitude a, LengthMagnitude b)
{
var defaultSize = a.Representation.ToDefault()*b.Representation.ToDefault();
var area = (AreaMeasurement) AreaMeasurement.SquareMeters.FromDefault(defaultSize);
return new AreaMagnitude(area);
}
}
public class AreaMagnitude : Magnitude<AreaMeasurement>
{
public AreaMagnitude(double value, AreaMeasurement instance) : base(value, instance)
{
}
internal AreaMagnitude(AreaMeasurement convertedInstance) : base(convertedInstance)
{
}
public static LengthMagnitude operator /(AreaMagnitude a, LengthMagnitude b)
{
var defaultSize = a.Representation.ToDefault()/b.Representation.ToDefault();
var length = (LengthMeasurement) b.Representation.FromDefault(defaultSize);
return new LengthMagnitude(length);
}
}
public abstract class AreaMeasurement : IMeasurement
{
public abstract string Unit { get; }
public abstract double Value { get; set; }
public abstract double ToDefault();
public abstract IMeasurement FromDefault(double value);
public static AreaMeasurement SquareMeters {get { return new SquareMetersMeasurment(); }}
public static AreaMeasurement SquareFeet { get { return new SquareFeetMeasurement(); } }
}
internal class SquareMetersMeasurment : AreaMeasurement
{
public override string Unit
{
get { return "m²"; }
}
public override double Value { get; set; }
public override double ToDefault()
{
return Value;
}
public override IMeasurement FromDefault(double value)
{
return new SquareMetersMeasurment{Value = value};
}
}
internal class SquareFeetMeasurement : AreaMeasurement
{
public override string Unit
{
get { return "sq ft"; }
}
public override double Value { get; set; }
public override double ToDefault()
{
return Value * 0.09290304;
}
public override IMeasurement FromDefault(double value)
{
return new SquareFeetMeasurement { Value = value / 0.09290304 };
}
}
```
``` c#
public void Area()
{
var a = new LengthMagnitude(200, LengthMeasurement.Meters);
var b = new LengthMagnitude(0.3, LengthMeasurement.Feet);
var area = a*b;
var feetArea = area.ConvertTo(AreaMeasurement.SquareFeet);
var km = area / new LengthMagnitude(0.5, LengthMeasurement.Kilometers);
}
```
@svick is this what you had in mind?
There's mention of US/Imperial units like feet here. How would using those deal with the cases where the US and Imperial measures have the same name, abbreviation and symbols but differ in actual measure, such as pints and gallons?
In my proposal we could just create two classes: `UsFeetLength` and `GbFeetLength`.

`Gb` would be inaccurate there, but that aside, what of abbreviations?

Since the abbreviation is only used for the `ToString()` method, this would have no impact on parsing or calculation results.
And I guess when it comes to user output it shares the same problems we all have with this stuff: you just can't tell by looking at the text.
Wow! This is what I call sudden & intensive participation! I am really busy during the whole weekend; even on Monday. Will come back on Tuesday and participate. I like this proposal a lot.
One sec. Why is it "up for grabs"? I do want to do the implementation myself (-> this was the whole point of starting this ticket; I want to code, not to discuss; but I cannot pull code right away). I thought that I was crystal clear on this front.
I mean... I will accept whatever the rules are, but want to understand them. I thought that saying "I take care of it myself" was enough.
@mellinoe Can you please clarify this point?
"Up for grabs" means community contribution. I think they assigned this since you are not a project member - I guess. ;)
@Toxantron Thanks for the clarification. Yes, I understand the concept :) I meant that I was expecting to take care of it myself. This is what I did with the previous issue I created. Also I thought that "up for grabs" was only meant for those issues where the proposer was interested just in sharing an idea rather than writing code. My only interest here is writing code; I created this issue just because this is the right proceeding for new APIs (firstly issue, then discussion, then PM). In fact, the discussion-with-the-community part is my least favourite stage (I had a bad past experience + I like coding rather than discussing abstract concepts).
Anyway... I am not interested in starting an argument. Just want to understand the rules for the future.
Generally I am all for coding myself - just occasionally a quick brainstorming can create better results. ;)
Apart from that I agree with you.
@Toxantron Yeah, something like that.
@Toxantron Your approach is nice and you put more effort into it than I did (shame on me :)). On the other hand, it also shows the kind of having-to-write-tons-of-code problems which I highlighted. It isn't just the associated effort (in fact, it doesn't need to be too relevant, as it basically consists of repeating similar code over and over), but also the associated reduction of efficiency. You have to bear in mind that this is a huge reality and delivering a not-comprehensive-enough approach would be almost against its purpose (and even kind of unprofessional: do it properly or better don't do anything); and, when dealing with something so big, each small bit counts. Not to mention that we are affecting a much bigger, all-tied-together package (= the .NET Framework; the compiled libraries where this code will be sharing space with many other implementations) and including a new big chunk might even affect its overall performance (certainly the size of the generated file).
I firstly focused on this parsing-based approach not just because of the associated smaller/more efficient code, but also because this is what I want to have as a final user. For example: my proposed `Magnitude var1 = new Magnitude(1.234m, "m");` might easily be converted into just `Magnitude var1 = new Magnitude("1.234 m");`, an approach which, at least for me, is likely to be very useful in quite a few contexts (i.e., just parsing a file directly or with minor `string`-based corrections). In case of choosing an approach along the lines of what you are proposing, parsing-like methods (i.e., `Parse` and `TryParse`) should also be added, which would provoke a further increase in the overall code size. I do like `enum`-based inputs in certain specific-enough contexts, but not so much in more generic situations because of what they provoke (i.e., an increase of complexity/code, or forcing a reliance on inefficient approaches, like `System.Reflection` methods).
In summary, I think that it might be better to start by implementing an as-low-impact (but also as-comprehensive) as possible approach. This is precisely what my initial suggestion delivers: just one type + taking care of all the possible alternatives internally with the smallest amount of code + delivering an adaptable input system (= `string`-based), useful under the most likely conditions. IMO, your approach would be better as a second-stage improvement, after having a first comprehensive and properly-working implementation in place.
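For illustration only (the helper name and the regex are my assumptions), the single-string overload would just need a small splitting step before reusing the existing (decimal, string) constructor:
``` C#
using System.Globalization;
using System.Text.RegularExpressions;

public static class MagnitudeInput
{
    //Splits "1.234 m" (or "1.234m") into its numeric and unit parts.
    public static bool TrySplit(string input, out decimal value, out string unit)
    {
        value = 0m;
        unit = null;

        var match = Regex.Match(input.Trim(), @"^(?<value>[+-]?\d+(\.\d+)?)\s*(?<unit>.+)$");
        if (!match.Success) return false;

        value = decimal.Parse(match.Groups["value"].Value, CultureInfo.InvariantCulture);
        unit = match.Groups["unit"].Value.Trim();
        return true;
    }
}
```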
@JonHanna Regarding your "There's mention of US/Imperial units like feet here...", I drew a very simplistic sample just to highlight the main ideas. Since the very first moment, my intention has been to be very comprehensive about accounting for all the possible alternatives. To me, this is the kind of implementation that can deliver a big plus ("wow! This is certainly nice"), but also a big minus ("nice words, but a good-for-nothing functionality"). This is one of the reasons why I think that an as-simplistic-as-possible approach (i.e., the `string`-parse-based one which I am proposing) is the way to go: there are lots of enumerations ahead, so better not to over-complicate things at the start.
@varocarbas I understand the benefits of your approach, but unless you limit it to just length measurements you will soon find yourself looking at a 500-line parsing constructor. And I won't even start on how big the conversion code might get. KISS applies in many situations; however, as we are working to create the .NET framework, to be used by hundreds of different projects and not some private helper library, we should go for the future-proof solution.
If you think string input is important, I recommend adding `Parse` methods like you already suggested. I would, however, add them to the different measurement types to keep the core class `Magnitude<T>` clean.
For example:
``` c#
public abstract class LengthMeasurement : IMeasurement
{
    // ....
    public static Magnitude<LengthMeasurement> Parse(string input)
    {
        var regex = new Regex(@"(?<value>\d+\.?\d*)\s?(?<unit>(?:\w+\s?)*)");
        var match = regex.Match(input);
        var value = double.Parse(match.Groups["value"].Value);
        switch (match.Groups["unit"].Value.Trim())
        {
            case "m":
                return new Magnitude<LengthMeasurement>(value, Meters);
            default:
                throw new FormatException("Unknown unit: " + input);
        }
    }
}
```
@Toxantron I don't see the disadvantage of a "500-line parsing constructor" if it avoids 5000 lines of (less efficient) code. Additionally, you don't need to write everything together; you can separate it by keywords ("m" or "ft" or "in" or... -> call `DealingWithLength`).
Regarding the parsing approach you suggest, the problem is not doing it for just one case but for all the different types which your approach requires. For example: you have 50 new types (or 100 or 500, not sure) and that's why you would have to create 50 new parsing approaches (together with the 50 specific classes/implementations).
I guess that you agree with me that your approach is more complicated than mine; also that this is quite a complex reality (at least, assuming that a comprehensive-enough solution should be built). So, why not keep it simple at the start?
@varocarbas The disadvantage lies mainly with long-term maintainability. Also I agree with @svick that, while your approach looks nicer at first sight, people will soon get frustrated by not having compile-time validation support. Accidentally adding "m" and "sq ft" will throw an exception, while my proposal brings the rules of physics into the compiler. Multiplying length by length gives an area, while dividing area by length gives another length.

> 5000 lines of (less-efficient) code

More code does not always mean slower code. Most of the code is boilerplate and the number of IL instructions is only a fraction of that. Furthermore, most of it can be inlined by the JIT, which will give exceptional performance. From my experience in bigger projects, 50 well-named classes are easier to maintain than 5 huge ones. Especially since we could place `LengthMeasurement` and all its derivations into the same file.
In the end it is neither my nor your decision but lies within the responsibility of a .NET Foundation member. Till then I suppose we should let this rest. In the meantime you could still create a NuGet package for your solution. I would create one for mine if I hadn't started enough projects already.
@Toxantron

> people will soon get frustrated by not having compile time validation support

Does this statement mean that people get frustrated with parsing-based approaches?! How could this be the case? A parsing approach is expected to be used in certain situations (e.g., reading a file), where there is no possible frustration/error, just what is expected (i.e., inputs are right/wrong; an error is triggered/not). On the other hand, using a strongly-typed approach in this exact situation is certainly very problematic (i.e., having to choose the right type every time when reading hundreds or thousands of records?!). Alternatively, when writing all the code yourself, `enum`s are certainly very useful. The question is: where is this implementation expected to be more useful? In situations where the programmer types everything, or in those where a big set of inputs is fed in a more or less automated way? I personally think that the answer is clearly the second option. If I have to type everything on my own, what is the problem with converting units? I don't need a fancy functionality to perform minor modifications which I could do in a few minutes. On the other hand, what about taking a 5000-line (or much bigger) file with numbers and units and automatically understanding everything? That would be quite a big deal. For example and as suggested by @mafshin, for IoT-related programming.
> More code does not always mean slower code

Certainly not (side note: I am a huge defender of ideas along the lines of "stop thinking that reducing the size of code no matter what is extremely important"). But the kind of code which your approach requires is certainly much less efficient than a simplistic `string`-based parsing approach.
> In the end it is neither mine nor your decision but lies within the responsibility of a .NET Foundation member

I am just using this issue as it is expected to be used: free discussion of the pros and cons of all the alternatives. I am not blindly defending what I proposed, but reasonably supporting what I think is the best solution for this problem.

> I am just using this issue as it is expected to be used: free discussion of the pros and cons of all the alternatives. I am not blindly defending what I proposed, but reasonably supporting what I think is the best solution for this problem.

Neither am I. I only meant that we have two very different solutions and it will be up to the .NET team to weigh the pros and cons we listed - I think we covered them pretty well.

> Does this statement mean that people get frustrated with parsing-based approaches?! How could this be the case?
That is true, but I think that, just like with `int.Parse("1")`, parsing should be limited to getting the typed object from the string and then keeping on working with the object. That is the reason we have `int.Parse()`, `double.Parse()`, etc. instead of `object ParseNumber()`. Because in the latter there would be no compiler support for validating numeric operations. Therefore, in a typed language like C#, parsing should be placed on top of the typed approach instead of embedded into it - IMO. Especially when you consider parsing files, you will be happy that conversions are embedded into the objects themselves so you don't have to worry when later using them as `a + b - c`.
> Because in the latter there would be no compiler support for validating numeric operations

Exactly the same as what I am proposing. For example, you have the following input file:
2cm
3ft
5in
10m
15cm
3in
You might use the following code (C#):
``` C#
List<Magnitude> inputs = new List<Magnitude>();
using (StreamReader sr = new StreamReader("myfile"))
{
string line = null;
while (true)
{
line = sr.ReadLine();
if (line == null) break;
inputs.Add(new Magnitude(line)); //Magnitude would automatically ignore any non-length input
}
}
Magnitude allAddition = new Magnitude(0, "ft");
foreach (Magnitude input in inputs)
{
allAddition = allAddition + input;
}
```
As you can see, I am fully exploiting the strongly-typed nature of C#.
What if I put 2 m² in there? Either your `Magnitude` class only supports length and you end up with a subset of my solution, or you have no validation for 2m + 1m². Using your example with my approach looks like this (depending on which class defines the `Parse` method):
``` c#
var inputs = new List<Magnitude<LengthMeasurement>>();
using (var sr = new StreamReader("myfile"))
{
string line = null;
while ((line = sr.ReadLine()) != null)
{
inputs.Add(LengthMeasurement.Parse(line));
}
}
var allAddition = inputs.Aggregate(new Magnitude<LengthMeasurement>(0, LengthMeasurement.Meters), (sum, next) => sum + next); //sum all parsed lengths starting from a zero-length seed
```
Is this really more complex? And you would get notified of a type mismatch the moment you try to convert 2 m² to a length.
> What if I put 2 m² in there? Either your Magnitude class only supports length

I explained it above: it is expected to deal with everything internally. That is: you see just one type (`Magnitude`), but internally there are as many types as required. All the operations have to be defined (via overloads, as shown for addition) and that's why you have full control over what you allow or not.
Take a further look at my first code and you will see what I mean: the `Magnitude` class is defined on account of two properties, `type` and `value`. `value` can be changed by the user at any point, but the property `type` cannot: it is `readonly`; its value is defined while instantiating the variable and the user has no control over it. For example: with an input of "m", the `Magnitude` variable would be of type length and International System of units, and there is no way to change that. Also, when adding it to another variable (or performing any other operation), its essence would be checked such that it can only be related to other variables of the same type. In case of performing operations between variables of different types, there would be a certain convention (e.g., the type of the first variable would be taken) and an error might be triggered or not.
In summary, and answering your question: if you put in 2 m², this record would certainly be ignored (by triggering an error or not), either when adding it to the list or when performing the addition. One thing is sure: there will not be any situation where different units might be mixed up; my proposal admits `string` inputs but makes sure that they are associated with the right type. Otherwise, what would be the point of my proposal?
> Is this really more complex?

No, it's not. But you are using a parsing-based approach similar to what I am suggesting, which is an addition to your original proposal. So, you would have to write everything you propose + what I proposed to get exactly the same as what you can get by only doing what I proposed.

> type mismatch the moment you try to convert 2 m² to a length

As said, you would also get an error with my approach (or the record might be plainly ignored). It would be like having `Parse` and `TryParse` (rather than the built-in type mismatch).
> But you are using a parsing-based approach similar to what I am suggesting, which is an addition to your original proposal. So, you would have to write everything you propose + what I proposed

Yes. I intentionally added a parsing API on top of the generic API.

> to get exactly the same as what you can get by only doing what I proposed.

No. `Magnitude<T>` still differs in compile-time validation, semantic representation and extensibility, either in the same project or any other project. Implementations of `IMeasurement` could cover everything from electrical current over mechanical force to thermal energy. I think we covered the pros and cons of either solution and the team or the community should somehow determine what is considered more useful for the framework. Chances are people might agree with you on the simpler approach; in this case I can still move mine into a separate NuGet package, or someone else can.
@mellinoe Do you mind commenting on the up-for-grabs issue? I wrote the definitive proposal for my previous ticket, but it didn't get that flag.
Also, what about Toxantron's new proposal? I was planning to start writing (code + tests) for my definitive proposal right away; but what should I do now? Wait for your (.NET team + community) decision regarding which option will be pursued?
I don't think "up for grabs" makes sense for this, at least from how I think about the label. "Up for grabs" usually means there is a concrete pieces of work to be done, but nobody has committed to doing it yet. In this case, it seems like there isn't even a design locked down, and even if there was, someone would already be lined up to implement it.
As for the discussion above: I have skimmed it, but don't have incredibly strong opinions about this topic. It seems like the whole discussion of string parsing should be entirely orthogonal to how the interface is expressed in terms of the type system. I would expect that any comprehensive measurement manipulation library would include some way to deal with string representations.
The library that @svick linked above: https://github.com/JohanLarsson/Gu.Units, looks pretty intriguing, and is something that I would probably enjoy using if I had need of a library like this. Some things that look nice in the library:
Just from skimming that library, it seems to have a lot of desirable qualities that the proposal in the original topic doesn't. @Toxantron 's proposal seems sort of similar to Gu-Units, but a bit different as well. In my opinion, though, Gu-Units' approach seems to be the best, perhaps because it is the most fleshed out, and has an actual implementation.
That said, I'm skeptical that we will be able to find a "one-size-fits-all" API shape that will be suitable for the BCL for something like this. I feel that this may be a bit of an opinionated area to step into for the BCL (the discussion above is a bit of that), so I'm not sure we'll be able to find something that works well for everyone. Will we be able to add a lot of value by providing a library like this when things like Gu.Units are available for use already?
@mellinoe Thanks for such a detailed answer.
As said to @svick, this other library is interesting but not exactly what I am proposing. My proposal is mostly focused on highly simplifying the management of more or less complex sets of inputs (i.e., reading a file). Your (and @Toxantron's) proposal of adding a parsing layer on top of a more comprehensive implementation would imply too big/complex a resulting code.
My answer to your last question is: not too much (not to mention the fact that including such a big implementation might even have a negative effect on the whole .NET performance). That's why my suggestion of leaving Toxantron's proposal (similar to that library, regarding overall goals and code size) as a second-level improvement; something to eventually do in the future, after a first, much simpler implementation has been proven useful.
I think that the sample code in one of my last posts gives a very good idea of the final goal here: automatically parsing big amounts of unit-based data, which can be very useful in many contexts; not something which the other proposals deliver (not in their most basic configuration). Additionally, my proposal involves quite a small amount of code (much smaller than the others), which seems required for a first-time implementation. The idea would be to keep it very simple on all fronts except regarding the types of units to be supported; bearing in mind that this is expected to be mostly managed via simplistic string parsing and enumerations (i.e., it is possible to account for a big number of situations without drastically increasing the code size or decreasing its performance).
My question for you is: can you find a library automatically, easily and efficiently dealing with units in as-big-as-required sets of inputs? Or, even better: shouldn't .NET start considering supporting such a reality? We are not talking about a small issue only relevant to a few, but about something which is present everywhere.
> My proposal is mostly focused on highly simplifying the management of more or less complex sets of inputs (i.e., reading a file).

Like I said above, I think string parsing should be a completely separate discussion from the design of the library. It seems like a baseline of functionality that would be required for any library like this to be useful. The purpose of string parsing methods is to transform input data into specialized domain types. It's more interesting to talk about how those specialized domain types are designed and used than how they are transformed from strings, in my opinion.
> not to mention the fact that including such a big implementation might even have a negative effect on the whole .NET performance

Just because there is a large surface area in the linked library doesn't mean it's slow. On the contrary, I would expect it to perform very well, because of a couple of the points I mentioned above:

- Your proposal has various values boxed as an `Object` by default. That won't be good for performance.
- It has type-safe operations at compile-time. It doesn't need to perform extra logic at runtime to determine whether operations are actually valid.

> That's why my suggestion of leaving Toxantron's proposal (similar to that library, regarding overall goals and code size) as a second-level improvement; something to eventually do in the future, after a first, much simpler implementation has been proven useful.
This is a difficult approach to take in the BCL. Adding a "second-level" improvement is only possible if it builds directly on top of the first part in a coherent way. In this case, it doesn't seem like it would. When we consider adding components to the BCL, we have to think about a lot of different things, like future-proofing, extensibility, etc.
> My question for you is: can you find a library automatically, easily and efficiently dealing with units in as-big-as-required sets of inputs?
I think if I needed a library similar to this, I would give Gu.Units a shot and see how it worked in practice. Unfortunately, I haven't used this kind of library in any real application, so I don't know the particular pitfalls that a library should solve, and whether or not Gu.Units solves those is something I'd have to find out from trial-and-error.
@mellinoe
> Like I said above, I think string parsing should be a completely separate discussion from the design of the library

Sure. Perhaps I didn't explain my point properly. I meant focusing on a more simplistic approach (which happens to deal with string parsing), like the one I am proposing, versus a more comprehensive one, like Toxantron's. In summary: having to create x number of not-precisely-simple classes with a relevant number of dependencies, or just one with a more complex internal structure.

> Just because there is a large surface area in the linked library doesn't mean it's slow

No doubt about that (I did also highlight in one of my comments above that I am a strong defender of don't-think-that-code-size-is-so-important ideas). It is not the number of lines of code, but its complexity. As said, we are talking about a high number (50, 100, more?) of new classes versus just one new class. I am sure that there is a relevant performance difference between both approaches. For example: I do have the tendency to rely on a quite big number of classes when facing small, well-delimited developments; and I have confirmed the big impact on performance of such an attitude when dealing with situations where this is a relevant issue (e.g., involving large amounts of data). Additionally, here we are not talking about just a few additional classes, but about lots of them.
> Your proposal has various values boxed as an Object by default. That won't be good for performance.

I came up with my proposal suddenly (I was writing something somewhere and thought about it) and wrote this sample code pretty quickly; nothing to do with Toxantron's undoubtedly more thoughtful proposal. But even so, the issue you highlight isn't that relevant when we are talking about 1 vs. many: the 1 might even be inefficient and still have a much smaller impact.

> It has type-safe operations at compile-time.

My approach is certainly peculiar (again, I came up with it relatively quickly), but I do like it quite a lot. Type-safe operations are certainly important, but what matters is mainly reaching such a goal rather than the means used to do it. In this specific context, I do think that my suggestion represents a pretty good solution: it greatly simplifies the complexity of such a big implementation. You are right about "It doesn't need to perform extra logic at runtime to determine whether operations are actually valid", but at what price? You have to write much bigger code. Implementing the rules to perform operations is extremely simple: do both operands have the same type? Yes, go ahead. No, apply the conflict conventions (e.g., just use the type of the first operand); just a few lines of code, not a big deal at all.

> Adding a "second-level" improvement is only possible if it builds directly on top of the first part in a coherent way

My thought on this front was firstly having ready the all-in-one type which I am suggesting; after it has been proven useful, it would be possible to start creating in parallel one type per magnitude, as in Toxantron's proposal. On one hand, you would have `Magnitude`, performing all the parsing actions among all the different types; on the other hand, `Length`, `Weight`, etc., delivering a much more programmer-friendly experience.
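To make that second stage a bit more concrete, here is a hedged sketch of what such a programmer-friendly wrapper could look like (`Length` and its members are illustrative, not part of the current proposal):
``` C#
//A thin, category-specific wrapper delegating all parsing/conversion work to the generic Magnitude.
public class Length
{
    private readonly Magnitude inner;

    public Length(decimal value, string unit)
    {
        //A real implementation would also verify here that the parsed unit is actually a length.
        inner = new Magnitude(value, unit);
    }

    private Length(Magnitude magnitude) { inner = magnitude; }

    public static Length operator +(Length a, Length b)
    {
        //Only lengths can ever reach this operator, so no runtime category check is needed.
        return new Length(a.inner + b.inner);
    }
}
```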
@mellinoe I couldn't agree more.
> @Toxantron's proposal seems sort of similar to Gu-Units, but a bit different as well. In my opinion, though, Gu-Units' approach seems to be the best, perhaps because it is the most fleshed out, and has an actual implementation.

I am quite sure that, unlike me, they spent a little more time than 15 minutes drafting a concept and API while actually having to do something else. ;-)
Below these lines, you can find a corrected version of my first code, where I am addressing some of the issues highlighted in the comments. That is:

- Removing the `enum`s and making it completely string-based. The main goal of this correction is emphasising the simplistic-parsing-based vs. more-complex-programmer-friendly contrast (i.e., first attempt vs. further improvements, or mine vs. Toxantron's).
- Removing the `object` variables, because they were not really useful, much less after having removed all the `enum`s.

``` C#
using System.Linq;

public class Magnitude
{
private static string[] LengthUnits = new string[] { "m", "ft", "in" };
public static Magnitude operator +(Magnitude first, Magnitude second)
{
if (first.type == second.type)
{
if (first.system.name == second.system.name && first.system.unit == second.system.unit)
{
return new Magnitude(first.value + second.value, first.system.unit);
}
else
{
return new Magnitude(first.value + ConvertUnits(second, first.system).value, first.system.unit);
}
}
else return first;
}
public static Magnitude ConvertUnits(Magnitude magnitude, SystemUnits target)
{
//Perform the required modifications in magnitude.value to match target
return magnitude;
}
public readonly string type;
public SystemUnits system { get; set; }
public decimal value { get; set; }
public Magnitude(decimal value, string unit)
{
this.value = value;
unit = CorrectInputUnit(unit);
if (LengthUnits.Contains(unit)) this.type = "Length";
if (this.type != null) system = new SystemUnits(unit);
}
private string CorrectInputUnit(string unit)
{
//Fixing eventual misspellings
return unit;
}
}
public class SystemUnits
{
private static string[] ISUnits = new string[] { "m" };
private static string[] ImperialUnits = new string[] { "ft", "in" };
public string name { get; set; }
public string unit { get; set; }
public SystemUnits(string unit)
{
this.unit = unit;
GetSystemFromUnit();
}
private void GetSystemFromUnit()
{
if (ISUnits.Contains(unit)) name = "International System";
else if (ImperialUnits.Contains(unit)) name = "Imperial System";
}
}
```
Still a very simplistic and preliminary version, but hopefully more descriptive of the kind of approach which I think will work better here.
What do you think @mellinoe ?
There are still a lot of problems with this sort of "stringly-typed" API, in my opinion. Here are a few off the top of my head:
I also wanted to comment on this:

> IMO, this second version can be escalated still more easily and involves a still smaller code size. This issue isn't completely secondary on account of the big amount of cases which are expected to be accounted for.
I don't think code size is an important metric here. The top priority should be creating a useful, usable, clean API. Second to that, the performance of the library would also be more important than the code size. Code size is a much lower priority, and would almost never supersede a well-designed interface. Also, code size is impossible to determine in this case, since the library isn't well- or fully-defined. It seems likely that a string-based approach is going to fall apart more quickly under higher complexity. I still think a generated-code solution is appropriate here. Although it will be a lot more lines of code (compiled), the generation source will be much smaller. That has its own problems and challenges, but is still manageable, in my opinion.
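For a rough sense of what that could mean in practice (purely illustrative; this is not how Gu.Units or any existing generator actually defines its input), the generation source could be little more than a declarative table of units from which the per-unit boilerplate classes are emitted:
``` C#
//Hypothetical generator input: one record per unit; a code generator would turn each record
//into a strongly-typed unit class (symbol, conversion to/from the quantity's base unit).
public sealed class UnitDefinition
{
    public string Quantity { get; set; }      // e.g. "Length"
    public string Name { get; set; }          // e.g. "Meter"
    public string Symbol { get; set; }        // e.g. "m"
    public double FactorToBase { get; set; }  // multiplier to the base unit of the quantity
}

public static class UnitCatalog
{
    public static readonly UnitDefinition[] Definitions =
    {
        new UnitDefinition { Quantity = "Length", Name = "Meter",     Symbol = "m",  FactorToBase = 1.0 },
        new UnitDefinition { Quantity = "Length", Name = "Kilometer", Symbol = "km", FactorToBase = 1000.0 },
        new UnitDefinition { Quantity = "Length", Name = "Foot",      Symbol = "ft", FactorToBase = 0.3048 }
    };
}
```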
I think this could be handled in much the same way JSON is now: JSON.NET isn't part of the framework, but it's pretty much the way to do JSON work in .NET. I think there should be a "sanctioned", existing third-party framework. Maybe https://github.com/JohanLarsson/Gu.Units is that, maybe not.
@mellinoe Note that my main motivation since the very first moment has been creating an ideal-to-me approach; that's why I don't share all your concerns.
More specifically:
- It's not user friendly.
It can be as user-friendly as required. And my intention has always been making it extremely user-friendly. What I meant with misspellings was accounting for likely inputs. For example: with meters, allowing "m", "m.", "meter", "metre", "meters", etc. The structure is so simple that all the effort can be focused on making it as adaptable as possible (i.e., accepting lots of inputs or different magnitudes).
- It's hard to refactor
This point is a bit less arguable. Hardcoded strings are undoubtedly more problematic on this front. On the other hand, with the right structure these problems can be highly minimised. Additionally, it isn't expected to be changed much after being created (+ my suggestion of adding the second-level improvement of more-detailed types). This might be its weakest point, but it has many very strong (in my opinion) ones.
- It's slow
I completely disagree with this one. It is much (much) faster than the equivalent tons-of-classes version. The addition makes a very simple check; this specific part might be a bit slower, but it is also secondary. The really big deal is the class which has to be instantiated every time (1 vs. many of them in Toxantron's alternative). I can create a bigger-but-still-preliminary version and extend Toxantron's code (unless he wants to do it himself), compare them and definitively settle this issue.
- This "SystemUnits"
No problem with this one either. As said, I was just trying to support my words with clearer code, but this is very far from being a definitive proposal. Since Toxantron came into the picture, I have been waiting for your definitive feedback regarding which direction we should focus on.
Shall I then go ahead and write a more definitive version of the code to definitively settle the speed issues? I am currently a bit busy, but I can get some time during this week and have something ready for next week.
I think that right now, the proposal is too broad to be actionable. Some of the points we are discussing are hard to agree upon if we don't have real implementations to compare. I still contend that this string-based, class-based approach will be slow because of unnecessary string-based type-checking and excessive GC allocations, but it's hard to prove that without any (usable) implementation to point to, or specific competitor to compare to. It may be worthwhile to spend some time creating a real, functional prototype of the library, and compare it to something like Gu.Units. This would help explore some of the usability and performance concerns, and could serve as the starting point for a full-fledged library (either inside or outside the BCL).
One option would be to submit such a (functional) prototype to our corefxlab repo, which houses other similar experimental libraries. For now, this may make the most sense. If you see that a lot of other folks are interested in the library, we can even get a NuGet package published into a private feed from that repository so that folks can consume and iterate on it without affecting the BCL itself.
That said, as I alluded to somewhere above in the thread, I am doubtful that we will ever pull something like this into the framework itself. It's just too "opinionated" of an area, and there's too many variables in the domain to tweak for us to be able to provide a "one-size-fits-all" library. Since there isn't a library with dominant popularity out there (unless I'm mistaken), it's hard for me to identify what works and what doesn't. That's not to say that there isn't a place for this kind of library in the ecosystem; on the contrary, it seems perfect to me as a separate component sitting on top of the BCL, as a redistributable NuGet package. I'm just not sure we will be able to make it fit in the BCL.
OK. I will go ahead and create a comprehensive enough first version for the proposed string-based approach. After finishing it, I will come back here and we will see how it might be included in the .NET Framework (if possible at all).
In any case and as said, I am currently a bit too busy and that's why I cannot spend too much time on this. Although I will start right away and have it ready ASAP.
I have started to write the code for the aforementioned proper version of my proposal and am liking it quite a lot. It will still take a while (not sure if I will finish it this month), but my ideas are already very clear. That's why I am writing this warming-things-up post with a clear-enough picture of how it is expected to look. Any suggestions will be more than welcome.
The basic structure continues as originally proposed. Sample code:
``` C#
Magnitude var1 = new Magnitude(1m, "m");
Magnitude var2 = new Magnitude(3.280839895013123m, "ft");
Magnitude var3 = var1 + var2; //2 m
```
- Just one main type (`Magnitude`) dealing with all the different categories.
- The type-safety aspects will be indirectly managed via `readonly` properties (i.e., each instance can only deal with one type, like Length or Weight).
- The instances will be created via string-parsing. The input format will be very flexible, as explained below.
Now I will describe some of the most relevant features on which I have been working (unfortunately, not too much; as said, too busy) during the last few weeks.
CLEAN & SCALABLE CODE
This is perhaps the most problematic part of the proposed approach and that's why I have made a special effort to come up with a good enough solution. One-word summary: dictionaries. The .NET dictionaries can be a true performance nightmare for big sizes. On the other hand, their performance for small enough amounts of items (i.e., what will be happening here) is certainly remarkable; additionally, they provide the kind of very-clear/easily-modifiable multi-element instantiation which is relevant in this specific case.
Some code to help make this point clearer:
``` C#
public static Dictionary<Magnitude.Type, Dictionary<Magnitude.System, Dictionary<string, decimal>>> allMagnitudes = new Dictionary<Magnitude.Type, Dictionary<Magnitude.System, Dictionary<string, decimal>>>
{
{
Magnitude.Type.Length, new Dictionary<Magnitude.System, Dictionary<string, decimal>>()
{
{
Magnitude.System.International, new Dictionary<string, decimal>()
{
{ "m", 1.0m }
}
},
{
Magnitude.System.Imperial, new Dictionary<string, decimal>()
{
{ "thou", 0.0000254m }, { "in", 0.0254m }, { "hand", 0.1016m },
{ "ft", 0.3048m }, { "cubit", 0.4572m }, { "yd", 0.9144m },
{ "pace", 1.524m }, { "fathom", 1.8288m }, { "rod", 5.0292m },
{ "chain", 20.1168m }, { "furlong", 201.168m }, { "mi", 1609.344m },
{ "NM", 1852m }, { "league", 4828.032m }
}
},
}
},
{
Magnitude.Type.Mass, new Dictionary<Magnitude.System, Dictionary<string, decimal>>()
{
{
Magnitude.System.International, new Dictionary<string, decimal>()
{
{ "g", 1.0m }, { "t", 1000000.0m }
}
},
{
Magnitude.System.Imperial, new Dictionary<string, decimal>()
{
{ "gr", 0.06479891m }, { "dr", 1.7718451953125m }, { "oz", 28.349523125m },
{ "lb", 453.59237m }, { "st", 6350.29318m }, { "sl", 14593.903m },
{ "cwt", 50802.34544m }, { "cwt_us", 45359.237m }, { "tn", 1016046.9088m },
{ "tn_us", 907184.74m }
}
},
}
},
};
private static Dictionary<string, decimal> preInt = new Dictionary<string, decimal>()
{
{ "Y", 1000000000000000000000000m },
{ "Z", 1000000000000000000000m },
{ "E", 1000000000000000000m },
{ "P", 1000000000000000m },
{ "T", 1000000000000m },
{ "G", 1000000000m },
{ "M", 1000000m },
{ "k", 1000m },
{ "h", 100m },
{ "da",10m },
{ "d", 0.1m },
{ "c", 0.01m },
{ "m", 0.001m },
{ "μ", 0.000001m },
{ "n", 0.000000001m },
{ "p", 0.000000000001m },
{ "f", 0.000000000000001m },
{ "a", 0.000000000000000001m },
{ "z", 0.000000000000000000001m },
{ "y", 0.000000000000000000000001m },
};
```
These are small excerpts of the code I am working on. More specifically, these two dictionaries deal with all the unit-conversion-related actions. `preInt` is brought into the picture while analysing IS units and will (logically) never change.
`allMagnitudes` deals with all the units of all the types in all the systems of units. This second dictionary is expected to be changed; in fact, this is almost the only part expected to be modified on a more or less regular basis. It seems quite clear that the format of this dictionary is very extension-/user-friendly and not error-prone. The only requirement is to add the main designation for the given unit (i.e., its valid abbreviation; note that there is another dictionary dealing with alternative names for each unit) and its relationship to the corresponding base unit (i.e., m for length and g for mass). Although this isn't enough for all the possible situations, it does address most of the eventualities. Additionally, it delivers an intuitive and easy way to account for the Imperial/USCS differences (e.g., "cwt" and "cwt_us"; logically, this is only used internally by the algorithm; from the user's perspective, there will be two fully-differentiated systems of units).
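To give an idea of how those two dictionaries might be consumed (a hedged sketch; the method names and exact lookup flow are assumptions about the still-unfinished code, and it relies on the `allMagnitudes` dictionary and a nested `Magnitude.Type` enum as shown above), a conversion would simply go through the base unit of the given type:
``` C#
//Converts a value between two units of the same type by going through the base unit
//(m for length, g for mass), using the factors stored in allMagnitudes.
public static decimal ConvertValue(Magnitude.Type type, decimal value, string fromUnit, string toUnit)
{
    decimal inBaseUnit = value * FindFactor(type, fromUnit); //e.g. 2 ft -> 0.6096 m
    return inBaseUnit / FindFactor(type, toUnit);            //e.g. 0.6096 m -> 24 in
}

private static decimal FindFactor(Magnitude.Type type, string unit)
{
    //Searches every system of units of the given type for the unit's conversion factor.
    foreach (var system in allMagnitudes[type])
    {
        decimal factor;
        if (system.Value.TryGetValue(unit, out factor)) return factor;
    }

    throw new ArgumentException("Unknown unit: " + unit);
}
```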
ERROR MANAGEMENT
Another criticisable part of my approach is the way in which errors should be managed. I am still working on this part, but my ideas are clear and will follow these guidelines:

- The `Magnitude` type will include a public (`readonly`) flag indicating whether the given instance was properly created or not.
- A `Parse`/`TryParse` duality, but fully managed via an (optional) argument at instantiation.
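A minimal sketch of how those guidelines might surface (member and parameter names here are illustrative, not final):
``` C#
//Stripped-down sketch of the error-handling surface only: a readonly success flag plus an
//optional argument deciding whether a bad input throws or just produces a not-valid instance.
public class Magnitude
{
    public readonly bool IsValid;

    public Magnitude(decimal value, string unit, bool throwOnError = false)
    {
        bool recognised = (unit == "m" || unit == "ft"); //stand-in for the real parsing logic

        if (!recognised && throwOnError)
        {
            throw new FormatException("Unrecognised unit: " + unit);
        }

        IsValid = recognised; //set once, at instantiation
    }
}
```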
VERY INTUITIVE & USER-FRIENDLY
This is the strongest point of my proposal and that's why I have paid (and will continue paying) special attention to these aspects. In fact, I have extended my original plans on this front even further; particularly interesting is the ability to create new (compound) magnitudes by performing operations (e.g., velocity when dividing length by time).
To get a good enough idea of the expected capabilities of this approach, below these lines you can see some of the ways in which a 1x1 m^2 area might be created:
``` C#
List<Magnitude> areas = new List<Magnitude>();
areas.Add(new Magnitude(1m, "m2"));
areas.Add(new Magnitude(1m, "m 2"));
areas.Add(new Magnitude(1m, "m^2"));
areas.Add(new Magnitude(1m, "m *m"));
areas.Add(new Magnitude(1m, "m·m"));
areas.Add(new Magnitude(1m, "m× m"));
areas.Add(new Magnitude(1m, "square m"));
areas.Add(new Magnitude(1m, "m sq"));
areas.Add(new Magnitude(100m, "dm2"));
areas.Add(new Magnitude(0.0001m, "ha"));
areas.Add(new Magnitude(10.76391041670972m, "ft2"));
areas.Add(new Magnitude(0.5m, "m") * new Magnitude(2m, "m"));
areas.Add(new Magnitude(1m, "m") * new Magnitude(3.280839895013123m, "ft"));
Magnitude total = new Magnitude(0m, "m2");
foreach (Magnitude area in areas)
{
total = total + area;
}
//total equals areas.Count m^2
```
I had a much more general UnitBase concept which I also coupled with an experimental Number class which resolved issues with conversions and otherwise.
I defined quite a few things, including Frequency et al., and you even had fun properties like IsVisible on the Wavelength class.
The unit derivations could be interchanged automatically and the mass energy equivalence equation worked also.
Complete Managed Media Aggregation Part III : Quantum Computing in C# ...
I will be bringing the Number class and the UnitBase class over shortly, as well as a revised version of Bitable which doesn't use bool, if anyone cares.
``` C#
using System;
using System.Collections.Generic;
using System.Linq;
/*
Copyright (c) 2013 [email protected]
SR. Software Engineer ASTI Transportation Inc.
Permission is hereby granted, free of charge,
* to any person obtaining a copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction,
* including without limitation the rights to :
* use,
* copy,
* modify,
* merge,
* publish,
* distribute,
* sublicense,
* and/or sell copies of the Software,
* and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
*
*
* [email protected] should be contacted for further details.
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
*
* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
* DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
* TORT OR OTHERWISE,
* ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
* v//
*/
///All types besides UnitBase could eventually be struct
namespace Media.Common
{
public interface IUnit
{
IEnumerable<string> Symbols { get; } //Used with formatting
/// <summary>
/// A Number which represents a value from which a scalar value can be calculated from the TotalUnits member
/// </summary>
Number Constant { get; }
/// <summary>
/// A Number which represents the total amount of integral units of this instance
/// </summary>
Number TotalUnits { get; }
}
public abstract class UnitBase : IUnit, IFormattable
{
public static readonly System.Globalization.RegionInfo CurrentRegion = System.Globalization.RegionInfo.CurrentRegion;
public static bool IsMetricSystem
{
get { return CurrentRegion.IsMetric; }
}
abstract protected List<string> m_Symbols { get; }
/// <summary>
/// Defines the number used to scale other distances to this number.
/// </summary>
public Number Constant { get; internal protected set; }
public IEnumerable<string> Symbols { get { return m_Symbols.AsReadOnly(); } }
public Number Units { get; protected set; }
IEnumerable<string> IUnit.Symbols
{
get { return Symbols; }
}
Number IUnit.Constant
{
get { return Constant; }
}
public Number TotalUnits
{
get
{
//More Flexible
//return Constant.ToDouble() > 1D ? Units.ToDouble() * Constant.ToDouble() : Units.ToDouble() / Constant.ToDouble();
return new Number(Units.ToDouble() * Constant.ToDouble());
}
}
/// <summary>
/// Constructs a new UnitBase with the given constant
/// </summary>
/// <param name="constant">The constant which when multiplied by the Units property represents a quantity</param>
public UnitBase(Number constant)
{
Constant = constant;
}
/// <summary>
/// Constructs a new UnitBase from another.
/// If the Constants of the two Units are the same the Units property is assigned, otherwise the Units is obtained by division of the other UnitBase's Units by this instances Constant.
/// </summary>
/// <param name="constant">The constant which when multiplied by the Units property represents a quantity</param>
/// <param name="other">Another Unit base</param>
public UnitBase(Number constant, UnitBase other)
: this(constant)
{
if (other.Constant != Constant)
Units = Constant.ToDouble() / other.Units.ToDouble();
else
Units = other.Units;
}
public virtual string ToString(string join = " ")
{
return Units.ToString() + join + (m_Symbols.FirstOrDefault() ?? string.Empty);
}
public override string ToString()
{
return ToString(null);
}
string IFormattable.ToString(string format, IFormatProvider formatProvider)
{
return string.Format(formatProvider, format, ToString());
}
}
public static class Distances
{
public interface IDistance : IUnit
{
Number TotalMeters { get; }
}
public class Distance : UnitBase, IDistance
{
public static readonly double PlankLengthsPerMeter = 6.1873559 * Math.Pow(10, 34);
public static readonly double MilsPerMeter = 2.54 * Math.Pow(10, -5);
public const double InchesPerMeter = 0.0254;
public const double FeetPerMeter = 0.3048;
public const double YardsPerMeter = 0.9144;
public const double MilesPerMeter = 1609.344;
public static readonly double AttometersPerMeter = Math.Pow(10, 18);
//1 yoctometer = 0,001 zeptometer
//1 attometer = 1000 zeptometer
//1 000 yoctometer
//0,001 attometer
//10−21 meter
public static readonly double ZeptometersPerMeter = Math.Pow(10, 21);
public static readonly double YoctometersPerMeter = Math.Pow(10, 24);
public const double NanometersPerMeter = 1000000000;
public const double MicronsPerMeter = 1000000;
public const double MillimetersPerMeter = 1000;
public const double CentimetersPerMeter = 100;
public const double DecimetersPerMeter = 10;
public const double M = 1;
public const double KilometersPerMeter = 0.001;
/// <summary>
/// The minimum distance in Meters = The Plank Length
/// </summary>
public static readonly Distance MinValue = Physics.ℓP;
public static readonly Distance PositiveInfinity = new Distance(Number.PositiveInfinty);
public static readonly Distance NegitiveInfinity = new Distance(Number.NegitiveInfinity);
public static readonly Distance Zero = new Distance(Number.ComplexZero);
static List<string> DistanceSymbols = new List<string>()
{
"ℓP",
"mil",
"in",
"ft",
"yd",
"mi",
"n",
"µ",
"mm",
"cm",
"m",
"km"
};
public Distance() : base(M)
{
Constant = MinValue.Constant;
Units = MinValue.Units;
}
public Distance(Number meters)
: base(M)
{
Units = meters;
}
public Distance(Distance other) : base(M, other) { }
public Distance(Number value, Distance other) : base(M, other) { Units = value; }
protected override List<string> m_Symbols
{
get
{
return DistanceSymbols;
}
}
public virtual Number TotalMeters
{
get { return Units; }
}
public virtual Number TotalInches
{
get { return TotalMeters / InchesPerMeter; }
}
public virtual Number TotalFeet
{
get { return TotalMeters / FeetPerMeter; }
}
public virtual Number TotalYards
{
get { return TotalMeters / YardsPerMeter; }
}
public virtual Number TotalKilometers
{
get { return TotalMeters * KilometersPerMeter; }
}
public static Distance FromInches(Number value)
{
return new Distance(value.ToDouble() * InchesPerMeter);
}
public static Distance FromFeet(Number value)
{
return new Distance(value.ToDouble() * FeetPerMeter);
}
public static Distance FromYards(Number value)
{
return new Distance(value.ToDouble() * YardsPerMeter);
}
public static Distance operator +(Distance a, int amount)
{
return new Distance(a.Units.ToDouble() + amount);
}
public static Distance operator -(Distance a, int amount)
{
return new Distance(a.Units.ToDouble() - amount);
}
public static Distance operator *(Distance a, int amount)
{
return new Distance(a.Units.ToDouble() * amount);
}
public static Distance operator /(Distance a, int amount)
{
return new Distance(a.Units.ToDouble() / amount);
}
public static Distance operator %(Distance a, int amount)
{
return new Distance(a.Units.ToDouble() % amount);
}
public static bool operator >(Distance a, IDistance b)
{
if (a.Constant != b.Constant)
return a.Units * b.Constant > b.TotalMeters;
return a.Units > b.TotalMeters;
}
public static bool operator <(Distance a, IDistance b)
{
return !(a > b);
}
public static bool operator ==(Distance a, IDistance b)
{
if (a.Constant != b.Constant)
return a.Units * b.Constant == b.TotalMeters;
return a.Units == b.TotalMeters;
}
public static bool operator !=(Distance a, IDistance b)
{
return !(a == b);
}
public override bool Equals(object obj)
{
if (obj is IDistance) return this == (obj as IDistance);
return base.Equals(obj);
}
public override int GetHashCode()
{
return Constant.GetHashCode() << 16 | Units.GetHashCode() >> 16;
}
}
}
//Angles?
public static class Frequencies
{
//// public enum FrequencyKind
//// {
//// Local,
//// Universal
//// }
//// public static class Clock
//// {
//// }
public interface IFrequency
{
Number TotalMegahertz { get; }
}
//http://en.wikipedia.org/wiki/Frequency
/* Frequencies not expressed in hertz:
*
* Even higher frequencies are believed to occur naturally,
* in the frequencies of the quantum-mechanical wave functions of high-energy
* (or, equivalently, massive) particles, although these are not directly observable,
* and must be inferred from their interactions with other phenomena.
* For practical reasons, these are typically not expressed in hertz,
* but in terms of the equivalent quantum energy, which is proportional to the frequency by the factor of Planck's constant.
*/
public class Frequency : UnitBase, IFrequency
{
public static implicit operator double(Frequency t) { return t.Units.ToDouble(); }
public static implicit operator Frequency(double t) { return new Frequency(t); }
public static readonly Frequency Zero = new Frequency(Number.ComplexZero);
public static readonly Frequency One = new Frequency(new Number(Hz)); //Hz
public const double Hz = 1;
public const double KHz = 1000D;
public const double MHz = 1000000D;
public const double GHz = 1000000000D;
public const double THz = 1000000000000D;
//http://en.wikipedia.org/wiki/Visible_spectrum - Audible?
public static bool IsVisible(Frequency f, double min = 430, double max = 790)
{
double F = f.Terahertz.ToDouble();
return F >= min && F <= max;
}
static List<string> FrequencySymbols = new List<string>()
{
"Hz",
"KHz",
"MHz",
"GHz",
"THz"
};
public Frequency()
: base(Hz)
{
//Constant = MinValue.Constant;
//Units = MinValue.Units;
}
public Frequency(double hertz)
: base(Hz)
{
Units = hertz;
}
public Frequency(Frequency other) : base(Hz, other) { }
public Frequency(Number value, Frequency other) : base(Hz, other) { Units = value; }
protected override List<string> m_Symbols
{
get
{
return FrequencySymbols;
}
}
public TimeSpan Period
{
get
{
//The period is the reciprocal of the frequency, expressed in seconds.
return TimeSpan.FromSeconds(1D / TotalHertz.ToDouble());
}
}
public virtual Number TotalHertz
{
get { return Units; }
}
public virtual Number TotalKilohertz
{
get { return TotalHertz / KHz; }
}
public virtual Number TotalMegahertz
{
get { return TotalHertz / MHz; }
}
public virtual Number TotalGigahertz
{
get { return TotalHertz / GHz; }
}
public virtual Number Terahertz
{
get { return TotalHertz / THz; }
}
public static Frequency FromKilohertz(Number value)
{
return new Frequency(value.ToDouble() * KHz);
}
public static Frequency FromMegahertz(Number value)
{
return new Frequency(value.ToDouble() * MHz);
}
public static Frequency FromGigahertz(Number value)
{
return new Frequency(value.ToDouble() * GHz);
}
public static Frequency FromTerahertz(Number value)
{
return new Frequency(value.ToDouble() * THz);
}
public static Frequency operator +(Frequency a, int amount)
{
return new Frequency(a.Units.ToDouble() + amount);
}
public static Frequency operator -(Frequency a, int amount)
{
return new Frequency(a.Units.ToDouble() - amount);
}
public static Frequency operator *(Frequency a, int amount)
{
return new Frequency(a.Units.ToDouble() * amount);
}
public static Frequency operator /(Frequency a, int amount)
{
return new Frequency(a.Units.ToDouble() / amount);
}
public static Frequency operator %(Frequency a, int amount)
{
return new Frequency(a.Units.ToDouble() % amount);
}
public static bool operator >(Frequency a, Frequency b)
{
if (a.Constant != b.Constant)
return a.Units * b.Constant > b.TotalUnits;
return a.Units > b.TotalUnits;
}
public static bool operator <(Frequency a, Frequency b)
{
return !(a > b);
}
public static bool operator ==(Frequency a, Frequency b)
{
if (a.Constant != b.Constant)
return a.Units * b.Constant == b.TotalUnits;
return a.Units == b.TotalUnits;
}
public static bool operator !=(Frequency a, Frequency b)
{
return !(a == b);
}
public override bool Equals(object obj)
{
if (obj is Frequency) return this == (obj as Frequency);
return base.Equals(obj);
}
public override int GetHashCode()
{
return Constant.GetHashCode() << 16 | Units.GetHashCode() >> 16;
}
//Add methods for conversion to time
//http://www.hellspark.com/dm/ebench/tools/Analog_Oscilliscope/tutorials/scope_notes_from_irc.html
}
//// public struct Date
//// {
//// public DateTime ToDateTime(Frequency? time = null);
//// }
}
public static class Temperatures
{
public interface ITemperature : IUnit
{
Number TotalCelcius { get; }
}
public class Temperature : UnitBase, ITemperature
{
public static implicit operator double(Temperature t) { return t.Units.ToDouble(); }
public static implicit operator Temperature(double t) { return new Temperature(t); }
public static readonly Temperature MinValue = 0D;
public static readonly Temperature One = 1D; //Celcius
const double FahrenheitMultiplier = 1.8;
public const double Fahrenheit = 32D;
public const double Kelvin = 273.15D;
public const char Degrees = '°';
static List<string> TempratureSymbols = new List<string>()
{
"C",
"F",
"K",
};
public Temperature()
: base(One.Units)
{
//Constant = MinValue.Constant;
//Units = MinValue.Units;
}
public Temperature(double celcius)
: base(One.Units)
{
Units = celcius;
}
public Temperature(Temperature other) : base(One.Units, other) { }
public Temperature(Number value, Temperature other) : base(One.Units, other) { Units = value; }
protected override List<string> m_Symbols
{
get
{
return TempratureSymbols;
}
}
public virtual Number TotalCelcius
{
get { return Units; }
}
public virtual Number TotalKelvin
{
get { return TotalCelcius + Kelvin; }
}
public virtual Number TotalFahrenheit
{
get { return TotalCelcius * FahrenheitMultiplier + Fahrenheit; }
}
public static Temperature FromFahrenheit(Number value)
{
return new Temperature((value.ToDouble() - Fahrenheit) / FahrenheitMultiplier);
}
public static Temperature FromKelvin(Number value)
{
return new Temperature(value.ToDouble() - Kelvin);
}
public static Temperature operator +(Temperature a, int amount)
{
return new Temperature(a.Units.ToDouble() + amount);
}
public static Temperature operator -(Temperature a, int amount)
{
return new Temperature(a.Units.ToDouble() - amount);
}
public static Temperature operator *(Temperature a, int amount)
{
return new Temperature(a.Units.ToDouble() * amount);
}
public static Temperature operator /(Temperature a, int amount)
{
return new Temperature(a.Units.ToDouble() / amount);
}
public static Temperature operator %(Temperature a, int amount)
{
return new Temperature(a.Units.ToDouble() % amount);
}
public static bool operator >(Temperature a, ITemperature b)
{
if (a.Constant != b.Constant)
return a.Units * b.Constant > b.TotalUnits;
return a.Units > b.TotalUnits;
}
public static bool operator <(Temperature a, ITemperature b)
{
return !(a > b);
}
public static bool operator ==(Temperature a, ITemperature b)
{
if (a.Constant != b.Constant)
return a.Units * b.Constant == b.TotalUnits;
return a.Units == b.TotalUnits;
}
public static bool operator !=(Temperature a, ITemperature b)
{
return !(a == b);
}
public override bool Equals(object obj)
{
if (obj is ITemperature) return this == (obj as ITemperature);
return base.Equals(obj);
}
public override int GetHashCode()
{
return Constant.GetHashCode() << 16 | Units.GetHashCode() >> 16;
}
public override string ToString()
{
return ToString(" " + Degrees);
}
}
}
public static class Masses
{
public interface IMass : IUnit
{
Number TotalKilograms { get; }
}
public class Mass : UnitBase, IMass
{
public const double AtomicMassesPerKilogram = 6.022136652e+26;
public const double OuncesPerKilogram = 35.274;
public const double PoundsPerKilogram = 2.20462;
public const double Kg = 1;
public const double GramsPerKilogram = 1000;
static List<string> MassSymbols = new List<string>()
{
"u",
"o",
"lb",
"kg",
"g",
};
public Mass()
: base(Kg)
{
//Constant = MinValue.Constant;
//Units = MinValue.Units;
}
public Mass(Number kiloGrams)
: base(Kg)
{
Units = kiloGrams;
}
public Mass(Mass other) : base(Kg, other) { }
public Mass(Number value, Mass other) : base(Kg, other) { Units = value; }
protected override List<string> m_Symbols
{
get
{
return MassSymbols;
}
}
public virtual Number TotalKilograms
{
get { return Units; }
}
public virtual Number TotalAtomicMasses
{
get { return TotalKilograms * AtomicMassesPerKilogram; }
}
public virtual Number TotalGrams
{
get { return TotalKilograms * GramsPerKilogram; }
}
public virtual Number TotalOunces
{
get { return TotalKilograms * OuncesPerKilogram; }
}
public virtual Number TotalPounds
{
get { return TotalKilograms * PoundsPerKilogram; }
}
public static Mass FromGrams(Number value)
{
return new Mass(value.ToDouble() / GramsPerKilogram);
}
public static Mass FromPounds(Number value)
{
return new Mass(value.ToDouble() / PoundsPerKilogram);
}
public static Mass FromOunces(Number value)
{
return new Mass(value.ToDouble() / OuncesPerKilogram);
}
public static Mass FromAtomicMasses(Number value)
{
return new Mass(value.ToDouble() / AtomicMassesPerKilogram);
}
public static Mass operator +(Mass a, int amount)
{
return new Mass(a.Units.ToDouble() + amount);
}
public static Mass operator -(Mass a, int amount)
{
return new Mass(a.Units.ToDouble() - amount);
}
public static Mass operator *(Mass a, int amount)
{
return new Mass(a.Units.ToDouble() * amount);
}
public static Mass operator /(Mass a, int amount)
{
return new Mass(a.Units.ToDouble() / amount);
}
public static Mass operator %(Mass a, int amount)
{
return new Mass(a.Units.ToDouble() % amount);
}
public static bool operator >(Mass a, IMass b)
{
if (a.Constant != b.Constant)
return a.Units * b.Constant > b.TotalUnits;
return a.Units > b.TotalUnits;
}
public static bool operator <(Mass a, IMass b)
{
return !(a > b);
}
public static bool operator ==(Mass a, IMass b)
{
if (a.Constant != b.Constant)
return a.Units * b.Constant == b.TotalUnits;
return a.Units == b.TotalUnits;
}
public static bool operator !=(Mass a, IMass b)
{
return !(a == b);
}
public override bool Equals(object obj)
{
if (obj is IMass) return this == (obj as IMass);
return base.Equals(obj);
}
public override int GetHashCode()
{
return Constant.GetHashCode() << 16 | Units.GetHashCode() >> 16;
}
}
}
public static class Energies
{
public interface IEnergy : IUnit
{
Number TotalJoules { get; }
}
public class Energy : UnitBase, IEnergy
{
public static implicit operator double(Energy t) { return t.Units.ToDouble(); }
public static implicit operator Energy(double t) { return new Energy(t); }
public static readonly Energy MinValue = 0D;
public static readonly Energy One = Joule;
public static readonly Energy Zero = 0D;
public const double ITUCaloriesPerJoule = 0.23884589663;
public const double BtusPerJoule = 0.00094781707775;
public const double ThermochemicalBtusPerJoule = 0.00094845138281;
public const double DekajoulesPerJoule = 0.1;
public const double Joule = 1;
public const double ExajoulesPerJoule = 1.0e-18;
public const double TerajoulesPerJoule = 1.0e-12;
public const double DecijoulesPerJoule = 10;
public const double CentijoulesPerJoule = 100;
public const double TeraelectronvoltsPerJoule = 6241506.48;
public const double FemtojoulesPerJoule = 1000000000000000;
public const double AttojoulesPerJoule = 1000000000000000000;
static List<string> EnergySymbols = new List<string>()
{
"J",
//"Btu",
};
public Energy(double joules)
: this(new Number(joules))
{
}
public Energy()
: base(Joule) { }
public Energy(Energy other) : base(Joule, other) { }
public Energy(Number joules)
: base(Joule)
{
Units = joules;
}
public Energy(Masses.IMass m) :
this(m.TotalKilograms.ToDouble() * Math.Pow(Velocities.Velocity.MaxValue.TotalMetersPerSecond.ToDouble(), 2))
{
}
protected override List<string> m_Symbols
{
get
{
return EnergySymbols;
}
}
public virtual Number TotalJoules
{
get { return Units; }
}
public virtual Number Decijoules
{
get { return TotalJoules * DecijoulesPerJoule; }
}
public virtual Number Dekajoules
{
get { return TotalJoules * DekajoulesPerJoule; }
}
public virtual Number TotalITUCalories
{
get { return TotalJoules * ITUCaloriesPerJoule; }
}
public static Energy FromITUCaloriesPerJoule(Number value)
{
return new Energy(value.ToDouble() / ITUCaloriesPerJoule);
}
public static Energy FromDekajoules(Number value)
{
return new Energy(value.ToDouble() / DekajoulesPerJoule);
}
public static Energy operator +(Energy a, int amount)
{
return new Energy(a.Units.ToDouble() + amount);
}
public static Energy operator -(Energy a, int amount)
{
return new Energy(a.Units.ToDouble() - amount);
}
public static Energy operator *(Energy a, int amount)
{
return new Energy(a.Units.ToDouble() * amount);
}
public static Energy operator /(Energy a, int amount)
{
return new Energy(a.Units.ToDouble() / amount);
}
public static Energy operator %(Energy a, int amount)
{
return new Energy(a.Units.ToDouble() % amount);
}
public static bool operator >(Energy a, IEnergy b)
{
if (a.Constant != b.Constant)
return a.Units * b.Constant > b.TotalUnits;
return a.Units > b.TotalUnits;
}
public static bool operator <(Energy a, IEnergy b)
{
return !(a > b);
}
public static bool operator ==(Energy a, IEnergy b)
{
if (a.Constant != b.Constant)
return a.Units * b.Constant == b.TotalUnits;
return a.Units == b.TotalUnits;
}
public static bool operator !=(Energy a, IEnergy b)
{
return !(a == b);
}
public override bool Equals(object obj)
{
if (obj is IEnergy) return this == (obj as IEnergy);
return base.Equals(obj);
}
public override int GetHashCode()
{
return Constant.GetHashCode() << 16 | Units.GetHashCode() >> 16;
}
}
}
public static class Velocities
{
public interface IVelocity : IUnit
{
Number TotalMetersPerSecond { get; }
}
public class Velocity : UnitBase, IVelocity
{
public const double FeetPerSecond = 3.28084;
public const double MilesPerHour = 2.23694;
public const double KilometersPerHour = 3.6;
public const double Knots = 1.94384;
public const double MetersPerSecond = 1;
public static readonly Velocity MaxValue = new Velocity(Physics.c);//the speed of light = 299 792 458 meters per second
static List<string> VelocitySymbols = new List<string>()
{
"mph",
"fps",
"kph",
"mps",
};
public Velocity()
: base(MetersPerSecond) { }
public Velocity(Number metersPerSecond)
: base(MetersPerSecond)
{
Units = metersPerSecond;
}
public Velocity(Velocity other) : base(MetersPerSecond, other) { }
public Velocity(Number value, Velocity other) : base(MetersPerSecond, other) { Units = value; }
protected override List<string> m_Symbols
{
get
{
return VelocitySymbols;
}
}
public virtual Number TotalMetersPerSecond
{
get { return Units; }
}
public virtual Number TotalMilesPerHour
{
get { return TotalMetersPerSecond * MilesPerHour; }
}
public virtual Number TotalFeetPerSecond
{
get { return TotalMetersPerSecond * FeetPerSecond; }
}
public virtual Number TotalKilometersPerHour
{
get { return TotalMetersPerSecond * KilometersPerHour; }
}
public static Velocity FromKnots(Number value)
{
return new Velocity(value.ToDouble() / Knots);
}
public static Velocity operator +(Velocity a, int amount)
{
return new Velocity(a.Units.ToDouble() + amount);
}
public static Velocity operator -(Velocity a, int amount)
{
return new Velocity(a.Units.ToDouble() - amount);
}
public static Velocity operator *(Velocity a, int amount)
{
return new Velocity(a.Units.ToDouble() * amount);
}
public static Velocity operator /(Velocity a, int amount)
{
return new Velocity(a.Units.ToDouble() / amount);
}
public static Velocity operator %(Velocity a, int amount)
{
return new Velocity(a.Units.ToDouble() % amount);
}
public static bool operator >(Velocity a, IVelocity b)
{
if (a.Constant != b.Constant)
return a.Units * b.Constant > b.TotalUnits;
return a.Units > b.TotalUnits;
}
public static bool operator <(Velocity a, IVelocity b)
{
return !(a > b);
}
public static bool operator ==(Velocity a, IVelocity b)
{
if (a.Constant != b.Constant)
return a.Units * b.Constant == b.TotalUnits;
return a.Units == b.TotalUnits;
}
public static bool operator !=(Velocity a, IVelocity b)
{
return !(a == b);
}
public override bool Equals(object obj)
{
if (obj is IVelocity) return this == (obj as IVelocity);
return base.Equals(obj);
}
public override int GetHashCode()
{
return Constant.GetHashCode() << 16 | Units.GetHashCode() >> 16;
}
}
}
public static class Forces
{
public interface IForce : IUnit
{
Number TotalNewtons { get; }
}
/*
newton is the unit for force
joules is the unit for work done
by definition, work done = force X distance
so multiply newton by metre to get joules
1 newton = 1 joule/meter
*/
public class Force : UnitBase, IForce
{
public static Energies.Energy ToEnergy(Distances.IDistance d)
{
return new Energies.Energy(d.TotalMeters.ToDouble());
}
public static implicit operator double(Force t) { return t.Units.ToDouble(); }
public static implicit operator Force(double t) { return new Force(t); }
public const double Newton = 1D;
static List<string> ForceSymbols = new List<string>()
{
"N"
};
public Force()
: base(Newton)
{
}
public Force(double newtons)
: base(Newton)
{
Units = newtons;
}
public Force(Force other) : base(Newton, other) { }
public Force(Number value, Force other) : base(Newton, other) { Units = value; }
protected override List<string> m_Symbols
{
get
{
return ForceSymbols;
}
}
public virtual Number TotalNewtons
{
get { return Units; }
}
public static Force operator +(Force a, int amount)
{
return new Force(a.Units.ToDouble() + amount);
}
public static Force operator -(Force a, int amount)
{
return new Force(a.Units.ToDouble() - amount);
}
public static Force operator *(Force a, int amount)
{
return new Force(a.Units.ToDouble() * amount);
}
public static Force operator /(Force a, int amount)
{
return new Force(a.Units.ToDouble() / amount);
}
public static Force operator %(Force a, int amount)
{
return new Force(a.Units.ToDouble() % amount);
}
public static bool operator >(Force a, IForce b)
{
if (a.Constant != b.Constant)
return a.Units * b.Constant > b.TotalUnits;
return a.Units > b.TotalUnits;
}
public static bool operator <(Force a, IForce b)
{
return !(a > b);
}
public static bool operator ==(Force a, IForce b)
{
if (a.Constant != b.Constant)
return a.Units * b.Constant == b.TotalUnits;
return a.Units == b.TotalUnits;
}
public static bool operator !=(Force a, IForce b)
{
return !(a == b);
}
public override bool Equals(object obj)
{
if (obj is IForce) return this == (obj as IForce);
return base.Equals(obj);
}
public override int GetHashCode()
{
return Constant.GetHashCode() << 16 | Units.GetHashCode() >> 16;
}
}
}
public static class Wavelengths
{
public interface IWavelength : IUnit
{
Distances.IDistance TotalMeters { get; }
Frequencies.IFrequency TotalHz { get; }
Energies.IEnergy TotalJoules { get; }
Velocities.IVelocity TotalVelocity { get; }
}
/*
wavelength λ relates to frequency f by λ = c / f (c = the speed of light)
and to photon energy E by E = h·c / λ (h = Planck's constant)
*/
public class Wavelength : UnitBase, IWavelength
{
public static implicit operator double(Wavelength t) { return t.Units.ToDouble(); }
public static implicit operator Wavelength(double t) { return new Wavelength(t); }
static List<string> WavelengthSymbols = new List<string>()
{
"nm",
"μm",
"m"
};
public const double Nm = 1D;
public Wavelength()
: base(Nm)
{
}
public Wavelength(Distances.Distance meters)
: base(Nm)
{
Units = meters.TotalMeters * Distances.Distance.NanometersPerMeter;
}
public Wavelength(double nanometers)
: base(Nm)
{
Units = nanometers;
}
public Wavelength(Frequencies.Frequency hZ)
: base(Nm)
{
//λ = c / f, converted to nanometres.
Units = (Velocities.Velocity.MaxValue.TotalMetersPerSecond.ToDouble() / hZ.TotalHertz.ToDouble()) * Distances.Distance.NanometersPerMeter;
}
public Wavelength(Wavelength other) : base(Nm, other) { }
public Wavelength(Number value, Wavelength other) : base(Nm, other) { Units = value; }
protected override List<string> m_Symbols
{
get
{
return WavelengthSymbols;
}
}
public virtual Distances.IDistance TotalMeters
{
get { return new Distances.Distance(Units.ToComplex() / Distances.Distance.NanometersPerMeter); }
}
public virtual Velocities.IVelocity TotalVelocity
{
get { return new Velocities.Velocity(Velocities.Velocity.MaxValue.Units.ToDouble() / Units.ToDouble()); }
}
public virtual Frequencies.IFrequency TotalHz
{
get { return new Frequencies.Frequency(Velocities.Velocity.MaxValue.TotalMetersPerSecond.ToDouble() / TotalMeters.TotalMeters.ToDouble()); }
}
public virtual Energies.IEnergy TotalJoules
{
get { return new Energies.Energy(new Number(Physics.hc / TotalMeters.TotalUnits.ToDouble())); }
}
public static Wavelength operator +(Wavelength a, int amount)
{
return new Wavelength(a.Units.ToDouble() + amount);
}
public static Wavelength operator -(Wavelength a, int amount)
{
return new Wavelength(a.Units.ToDouble() - amount);
}
public static Wavelength operator *(Wavelength a, int amount)
{
return new Wavelength(a.Units.ToDouble() * amount);
}
public static Wavelength operator /(Wavelength a, int amount)
{
return new Wavelength(a.Units.ToDouble() / amount);
}
public static Wavelength operator %(Wavelength a, int amount)
{
return new Wavelength(a.Units.ToDouble() % amount);
}
public static bool operator >(Wavelength a, IWavelength b)
{
if (a.Constant != b.Constant)
return a.Units * b.Constant > b.TotalUnits;
return a.Units > b.TotalUnits;
}
public static bool operator <(Wavelength a, IWavelength b)
{
return !(a > b);
}
public static bool operator ==(Wavelength a, IWavelength b)
{
if (a.Constant != b.Constant)
return a.Units * b.Constant == b.TotalUnits;
return a.Units == b.TotalUnits;
}
public static bool operator !=(Wavelength a, IWavelength b)
{
return !(a == b);
}
public override bool Equals(object obj)
{
if (obj is IWavelength) return this == (obj as IWavelength);
return base.Equals(obj);
}
public override int GetHashCode()
{
return Constant.GetHashCode() << 16 | Units.GetHashCode() >> 16;
}
}
}
//Current -> //http://en.wikipedia.org/wiki/Coulomb
}
```
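For reference, a minimal usage sketch against the types above. It assumes the Number and Physics helpers from the linked article (not included in this excerpt) behave the way the code above already uses them; names and values are only illustrative.
``` C#
using Media.Common;

class UnitBaseSample
{
    static void Main()
    {
        // 10 ft, stored internally as metres by the Distance type.
        var tenFeet = Distances.Distance.FromFeet(new Number(10d));

        // Roughly green light: 5e14 Hz falls inside the default range used by IsVisible.
        var green = new Frequencies.Frequency(5e14);
        bool visible = Frequencies.Frequency.IsVisible(green);

        // Mass-energy equivalence via the Energy(IMass) constructor (here, 1 kg).
        var oneKilogramAsEnergy = new Energies.Energy(new Masses.Mass(new Number(1d)));
    }
}
```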
While further working on this library (still not too much, still too many things going on), I have got even more ideas to extend it. Finally, I have decided to build a wider parsing library taking care of more things than just magnitudes (e.g., equations). It is expected to be a very scalable/modular/comprehensive framework relying on the ideas shown in my last posts (i.e., very programmer-friendly, safe and adaptable parsing approaches accounting for a high number of different alternatives).
Thus, the upcoming magnitudes library will be just one part of a more comprehensive whole. It will be fully independent, though (like all the other parts). Other than adding some basic, adaptable-enough common structure, I will be mostly focusing on the magnitudes part. That is: this change will not affect the (pretty imprecise, sorry about that) original timeframe. In any case, I think these ideas about where this development is going and what it might imply are worth sharing.
Hey guys. I stumbled upon this thread while looking at System.Numerics. I was looking at Vector so I could use it in a library that deals with measurement units.
I think that I solved some of the problems discussed here. It should be reasonably fast, although I haven't done any benchmarks yet. It also supports derived units.
https://github.com/milutinovici/metric
There is also a branch with initial vector based implementation.
@milutinovici After a very quick look, it seems interesting. Apparently, it follows the ideas I proposed, as opposed to other much "heavier" alternatives.
FYI, I am working on a really detailed version. In fact, I have taken this as an excuse to start a quite big (although formed by many inter-independent parts) parsing library which will take care of virtually anything. That's why I preferred to spend a relevant amount of time and effort already in the first version of the units part.
In summary: by the end of July, I am planning to have ready a well-documented/structured/comprehensive version of the proposed very adaptable and intuitive unit parsing approach. I would rather wait until then before taking a deeper look at your code.
@varocarbas I'm glad you find it interesting. Although I have to say that I agree with @mellinoe: a stringly-typed approach seems like a bad idea. Of course there should be parsing capability, but not at the core of the implementation. I mean, throwing exceptions in a constructor is just yucky.
@milutinovici
throwing exceptions in a constructor is just yucky.
My suggestion is emulating the difference between `anything.Parse`/`TryParse`; that is: triggering an exception or just letting a flag communicate the error. Why do you see it as wrong here and not wrong there?
For example: in the code I am currently writing there is a `public readonly` flag called `Error`, which tells whether an error happened or not (the exception might be triggered or not, as instructed). What is the problem with this implementation?
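For illustration only, a minimal sketch of the flag-based handling being described (the UnitP name and members here are placeholders, not the actual library):
``` C#
using System;

public class UnitP
{
    public readonly bool Error;   //True when the input could not be parsed.
    public readonly decimal Value;
    public readonly string Unit;

    public UnitP(string input, bool throwOnError = false)
    {
        //Extremely simplified parsing, just to show the flag vs. exception duality.
        string[] parts = (input ?? "").Trim().Split(' ');
        decimal value;
        if (parts.Length == 2 && decimal.TryParse(parts[0], out value))
        {
            Value = value;
            Unit = parts[1];
        }
        else
        {
            Error = true;
            if (throwOnError) throw new FormatException("Invalid magnitude: " + input);
        }
    }
}
```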
@milutinovici Let me clarify my ideas (others' and what seems yours) because perhaps some parts are a bit confusing:
1. Exceptions don't have to be thrown; errors can also be communicated through a `public` flag.
2. The approach doesn't rely only on `string` variables. In fact, I am relying on `enum`s a lot. There will be different constructors and some of them will take `enum`s as inputs (although the process will be much messier; as said, my approach is expected to account for many things).

Hopefully, everything is completely clear now.
@varocarbas Try and find a class in corefx that throws in a constructor. Using a constructor should always create an object in a valid state, and that is impossible to guarantee if you are parsing strings. I don't want to check any flags when constructing objects. I didn't do it thus far, and I don't want to start now.
As for single/multiple types, that's a different issue. Ideally multiple types would be better, if c# had a more advanced type system, but it doesn't. Downside of single type is that there could be some runtime exceptions (when adding, subtracting, or comparing).
@milutinovici
Try and find a class in corefx that throws in a constructor.
If this is such a big problem, I can change my approach so that the parsing occurs in a method (like in `decimal.Parse("decimal")`). Rather than `new UnitP("unit")`, I can do `UnitP.Parse("unit")`; a pretty simple modification (not sure why this is better, though).
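Moving the parsing to static factories, as suggested, could be as simple as wrapping the same logic. Again hypothetical, building on the UnitP sketch a few comments above:
``` C#
public static class UnitPFactory
{
    //Mirrors the BCL Parse/TryParse pattern on top of the flag-based constructor.
    public static UnitP Parse(string input)
    {
        return new UnitP(input, throwOnError: true);
    }

    public static bool TryParse(string input, out UnitP result)
    {
        result = new UnitP(input);
        return !result.Error;
    }
}
```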
@milutinovici, a more advanced type system in C#, eh? In what way more advanced...?
Furthermore, constructors may not run due to out-of-memory conditions or otherwise; how would you handle that without checking or a try/catch block?
When having a UnitType as the base, the design implies that all units can be converted to other units, yet some units cannot be linearly converted into other units via a power or otherwise...
@varocarbas, et al.
In such situations, e.g., how are you going to convert the unit? The Unit has no understanding of how to be converted, only what VALUE it stored...
Furthermore, if you can't convert a unit to another type of unit without additional logic, then what purpose do either of these designs serve?
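To make the non-linearity point concrete: a single multiplicative constant per unit covers lengths, but not temperatures (affine) or wavelength/frequency (reciprocal). A trivial sketch using the well-known conversion formulas:
``` C#
public static class ConversionShapes
{
    //Length: one multiplicative constant is enough (1 ft = 0.3048 m).
    public static double MetresToFeet(double metres) { return metres / 0.3048; }

    //Temperature: the conversion is affine (scale + offset), so a lone constant cannot express it.
    public static double CelsiusToFahrenheit(double celsius) { return celsius * 1.8 + 32; }

    //Wavelength to frequency: a reciprocal relation, not a scale factor at all (c = 299792458 m/s).
    public static double WavelengthMetresToHertz(double metres) { return 299792458.0 / metres; }
}
```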
@varocarbas That would be better
@milutinovici Yes, I know that you think in this way. My question is why (I don't see the improvement).
Anyway, I will come back once my version is ready.
@varocarbas Parsing should be a secondary concern. It shouldn't be the primary way to construct objects. It's just too fragile: you get no help from your IDE, no IntelliSense, and typos are easily made.
@milutinovici The instance is created anyway. It would be the equivalent of instantiating a null variable or assigning a number to a numerical variable outside the min/max bounds (although in this case an exception would be triggered! -> the flag would be better here, IMO).
In any case, I understand the overall .NET structure and how important respecting its consistency is; this is a secondary modification which I wouldn't mind performing. On the other hand, I will most likely keep the constructor version in my library (because it is mine and I do with it what I want :) kidding; because it is a parsing library, where parsing is the default behaviour, and this should be fine).
As said, all clear; I will come back in some weeks once my code is ready.
Both of your implementations are too fragile, IMHO. Furthermore, they serve no purpose because they only shortcut the conversion of similar units, e.g. miles to feet or inches... We ALL already do the same thing whenever we convert the terms of an equation...
@varocarbas, @milutinovici
Check out my examples above and @ http://net7mma.codeplex.com/SourceControl/latest#Concepts/Classes/Units.cs
To parse, you can provide a static method which can be exposed from the derived types if they so desire...
``` C#
static bool Parse(UnitBase units, string value, int offset = 0, int count = -1, char[] symbols = null, System.Globalization.NumberStyles ns = System.Globalization.NumberStyles.None, System.Globalization.NumberFormatInfo nfi = null)
{
if ((units == null || units.Symbols == null) && symbols == null || string.IsNullOrWhiteSpace(value)) return false;
if (count < 0) count = value.Length - offset;
int symbolIndex = value.IndexOfAny(symbols ?? units.Symbols.SelectMany(s => s.ToArray()).ToArray(), offset, count);
if (symbolIndex < 0 || symbolIndex > count) return false;
if (units != null)
{
try
{
units.Units += Number.Parse(System.Text.Encoding.Default.GetBytes(value.ToCharArray(), symbolIndex, count), symbolIndex, count = value.Length - symbolIndex, System.Text.Encoding.Default, ns, nfi ?? System.Globalization.NumberFormatInfo.CurrentInfo);
}
catch
{
return false;
}
}
else
{
//Must check loaded types and get all Symbols from loaded types which derive from UnitBase or implement IUnit etc...
throw new NotImplementedException();
}
return true;
}
```
Perhaps, instead of bickering over whose implementation is chosen, we can collaborate to provide something which would be even better.
For doing the kind of thing you guys want to do, e.g. "2m*2", you would need an 'Equation' class IMHO, and it shouldn't be coupled to the `Unit` at all; parse the Units first and then do the math if possible.
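As a rough sketch of that separation (all names hypothetical): the expression logic lives outside the unit type, which only ever sees the already-isolated number and symbol.
``` C#
public class EquationResult
{
    public double Value;
    public string Unit;
}

public static class Equation
{
    //Toy evaluator for expressions of the exact form "<number> <unit> * <number>",
    //e.g. "2 m * 2" -> 4 m. A real version would tokenise properly and hand the
    //unit part to whatever unit type is plugged in.
    public static EquationResult Evaluate(string expression)
    {
        string[] tokens = expression.Split(' ');
        return new EquationResult
        {
            Value = double.Parse(tokens[0]) * double.Parse(tokens[3]),
            Unit = tokens[1]
        };
    }
}
```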
I have been working really hard on this library during the last weeks, but things have got slightly out of hand. That's why I cannot release it by the promised date (today).
I can anticipate that it is quite a good piece of software (I mean... at least, I like it pretty much) and that it has grown notably beyond the original expectations. This version is certainly not include-in-.NET-right-away material, because it is too big. In any case, it will certainly deliver an excellent idea of what the approach I am proposing can do.
Regarding the other suggestions, I do want to do some tests and write my detailed impressions and pros/cons here. In any case and as said, I have been spending too much time on all of this lately and do need a pause from it. So, after releasing my version (in the next few days; everything is done already, I just want to do some proper tests) + letting you know here, I will take some time (two weeks?) before coming back to compare it with the other versions.
@mellinoe @Toxantron @milutinovici
I have finished the first version of my library (UnitParser, as part of the more comprehensive to-be-built FlexibleParser). You can take a look at the main code, at a set of descriptive samples or download the DLL from here.
As said, things got a bit out of hand and I haven't had time to complete everything. For example: basic instructions or a comprehensive sample application are still missing; also, I will be further debugging/optimising the code during the next weeks. On the other hand, I am certainly happy with the overall result and see all these small issues as an unavoidable consequence of the huge complexity involved (better: the huge complexity of getting the kind of performance I was after).
You can already do some tests (simple steps: add the DLL to a project, reference the FlexibleParser namespace, declare a `UnitP` variable and start parsing) or wait for me to complete the aforementioned pending issues ASAP (meaning not precisely right away, because of having to somehow compensate for all the unplanned time I have been spending on this during the last weeks).
In a nutshell, you should be able to parse virtually anything (within the supported unit types, which are quite a lot, as you can see here). For example: `kg*m/s2`, `N`, `Mg*mm/s2`, `J/m` or `s*W/m`, input via string or arithmetic operations, are all treated identically. Another relevant feature is that it is able to deal with really big/small numbers (notably beyond the range of the `decimal` type, which this library uses almost exclusively).
This outcome is notably outside some of my original intentions. For example, including such big code (but relatively small, on account of what it can do) in the .NET Framework is certainly out of the picture. Additionally, this issue was initially focused on magnitudes by assuming some vectorial component; this part has been completely removed from my implementation. In any case, I don't see any serious drawback here because of the numerous positive aspects. One thing is completely sure: this library will help anyone to fully understand the kind of performance which my proposal can deliver.
As already said, I will come back in some weeks with a somehow-detailed comparison including all the approaches referred in this issue.
@varocarbas I appreciate the update. However I only stumbled upon this thread by accident. I saw a chance to contribute an idea and did it. I have stopped following the discussion and have very little interest in its outcome.
I wish you lots of luck with your implementation. ;)
@Toxantron OK. Thanks for the clarification.
Best of luck with your future stumblings :)
Although my last comment doesn't seem to have triggered too much attention, I will go ahead and comment about the other approaches anyway.
First thing to highlight: the stable v1 of UnitParser still isn't ready. In principle, it should be completed within the next 2 weeks; honestly, I am neither too sure nor in a hurry, only interested in finishing an IMO-so-good development as it deserves (= properly). In any case, the current version (readme, code and UnitParser.dll) is quite good already. There is also a NuGet package, but it doesn't include the last improvements.
I started my analysis of the other approaches today and it has been much faster than what I was expecting; my conclusion was also much more radical than expected: they don't even belong to the same category. Honestly, seeing such a big difference has been a bit surprising. On the other hand, bear in mind that I have a relevant background in mechanical/industrial engineering (including metrology, which is precisely what all this is about) and I was very interested in creating top-quality code (my first open source project, which I expect to be one of my main self-promotional resources). Note that I haven't spent too much time on analysing the codes, just the functionalities and how the whole situation is being faced.
I will start with the alternative with the highest number of references in this thread (Gu.Units). As commented above, its one-class-per-type structure seems too heavy and unadaptable in my opinion. The size of its DLL (almost 4 times bigger than mine) looks like a confirmation of that assumption. I haven't done any speed tests though, mainly because of the relevant differences between both approaches, as explained below these lines.
From its readme file:
``` C#
private static LengthUnit m = LengthUnit.m;
private static TimeUnit s = TimeUnit.s;
[Test]
public void ArithmeticSample()
{
Length length = 1*m;
Time time = 2*s;
Speed speed = length/time;
Assert.AreEqual(0.5, speed.MetresPerSecond);
}
```
My version delivers something like:
``` C#
UnitP unitLength = new UnitP(1m, "m");
//or
unitLength = new UnitP("1 m");
//or
unitLength = new UnitP(1m, Units.Metre);
//or
unitLength = new UnitP(1m, UnitSymbols.Metre);
//etc.
//Additionally, mine supports many different length units like...
unitLength = new UnitP(1m, Units.Mile);
unitLength = new UnitP(1m, Units.SurveyInch);
unitLength = new UnitP(1m, Units.Angstrom); //etc.
//Same thing for time.
UnitP unitTime = new UnitP("1 s"); // or new UnitP("1 min") or new UnitP("1 h"), etc.
//Length can also be divided by time and multiplied by numbers. A velocity is also generated.
unitLength = new UnitP("1 m") * 1.0;
unitTime = new UnitP("1 s") * 2.0 ;
UnitP speed = unitLength / unitTime;
if (speed == new UnitP("0.5 m/s")) // or new UnitP(0.5m, Units.MetrePerSecond) or etc.
{
//This condition is true.
}
//What mine can also do is dealing with numbers of any size (as big/small as required) without triggering an error unless expressly instructed by the user. Also it gracefully manages prefixes.
speed = new UnitP("1 km") / new UnitP("1 ms"); //1000000 m/s
UnitP reallyBig = 99999999999999999999999999999999999999999999.999999999 * new UnitP("1 Ym") * double.MaxValue; //179769,31348623200000*10^347 Ym
//It can also convert automatically units which aren't good together (e.g., belonging to different systems).
UnitP speed2 = new UnitP("1 m") * new UnitP("1 ft"); //0.3048 m2 -> converts ft to SI, the system of the first operand.
//Logically, it can perfectly recognise all the units, types and systems (SI, Imperial, USCS or CGS) and perform all the conversions among them.
UnitSystems system = new UnitP("1 ft").UnitSystem; //ImperialAndUSCS
//It can also add/subtract same type units.
UnitP unitLength2 = new UnitP("1 m") + new UnitP("1 ft"); //1.3048 m yes, it also performed an automatic conversion here.
```
Also from its readme file:
``` C#
[Test]
public void Sample()
{
var l = Length.FromCentimetres(1.2);
Assert.AreEqual(0.012, l.Metres);
}
```
Mine can also do quite a few things on this front.
``` C#
//As already shown, it performs automatic conversions when required; but also when instructed.
UnitP unitLength3 = new UnitP("1 m").ConvertCurrentUnitTo(Units.Inch); //39.37... in
//Prefixes aren't considered a conversion, but something which is being naturally used. It is possible to even use weird prefix-unit combinations.
UnitP kiloFoot = new UnitP("1 kft", PrefixUsageTypes.AllUnits);
//Not to mention the powerful compound support, which also works in conversions.
UnitP velocityUnit = new UnitP("1 km/h").ConvertCurrentUnitTo("mi/s"); //0.0001726.. mi/s
```
I hope that, at this point, the numerous differences between both approaches are clear. If you still have doubts, you might want to take a look at the test application, which gives a quite good idea of what UnitParser can do.
Regarding Metric.NET (milutinovici's library), it is more similar to mine but still with huge differences.
It does rely on a single-class approach like mine and has similar-to-mine variable instantiation/operations. But it doesn't account for different systems/units/types, doesn't include good-enough unit recognition, and doesn't account for complex situations (like parsing compounds/prefixes), etc., not to mention the error management and the values-of-any-size support.
Neither of the aforementioned libraries can take care of more than a handful of the sample cases included in my test application (i.e., a descriptive-but-not-exhaustive summary of what my approach can deliver).
As said at the start, I haven't analysed their codes in depth and (just in case) I clarify that my intention isn't to criticise anyone/anything. I am just showing the reality as it is: my approach doesn't even belong to the same group as these other two libraries.
I am pretty happy (and, what the heck, proud!) with my code so far (bearing in mind that there is still some hard work ahead), with my approach to this problem and with what it represents (= the parsing methodology which I am planning to include in all the upcoming parts of FlexibleParser). I am happy to have started this thread, because it represents the origin of this whole development. I also hope that other people have learned from/enjoyed what has been shared here. In any case, I think that all this has gone quite outside the original scope.
Looking forward to some feedback and your (= .NET team) conclusions.
Thanks for the updates, @varocarbas; glad to see you've gotten a working prototype up and running and looked into some alternative approaches! The new code examples do seem functional and fairly straightforward. I still think there are some conceptual and structural problems I have with the library (like the use of classes, some naming conventions, parsing, etc.), but really it is up to the developer and users of the library to decide what's best for them on some of those topics. There's a lot of different ways something like this could be designed and expressed in .NET, and, like you said, it's difficult to directly compare them at times.
After all the above, are you still of the opinion that we should try to bring this functionality into the core libraries here in corefx? I still tend to think that this domain is a bit too niche and opinionated to belong in the BCL; there's just not a ton of applications that would use it, and people will most likely have their own preference for expressing these concepts. That's just speaking from my experiences and interests, though; obviously other folks work in different domains and need different domain libraries. To me, it makes the most sense as a third-party component that folks in the domain can go and grab, reference, and build on top of.
@mellinoe
some conceptual and structural problems I have with the library (like the use of classes, some naming conventions, parsing, etc.)
If you face the analysis from an abstract, theoretically-perfect, making-sure-that-any-user-will-never-make-a-mistake perspective, you will certainly see lots of problems with my approach (e.g., many scenarios where wrong inputs are possible; although managing all the errors internally and only triggering exceptions when expressly instructed helps on this front). Additionally, bear in mind that this library doesn't follow all the .NET Framework code conventions (plainly, because it doesn't have to), as highlighted in some of the comments above. In any case, performing all the required adaptations on these fronts would be quite straightforward.
On the other hand, if you face the analysis bearing in mind the huge complexity of the given reality (which this first version answers pretty well) and what a user (mainly a power user) would expect, you shouldn't see too many problems. It even addresses quite a few concerns of less-knowledgeable users. For example, all the typing-something-wrong concerns can be easily avoided thanks to the `string` constant nature of all the main symbols (e.g., you can do `new UnitP("kg*m/s2")` or `new UnitP(SIPrefixSymbols.Kilo + UnitSymbols.Gram + "*" + UnitSymbols.Metre + "/" + UnitSymbols.Second + "2")` or, well, just `new UnitP("N")` & `new UnitP(UnitSymbols.Newton)`). In any case, one thing is clear: this is a tool meant for power users, with enough knowledge and complex-enough problems; neither a wide user base nor the kind of issues which a basic programming framework is expected to address.
After all the above, are you still of the opinion that we should try to bring this functionality into the core libraries here in corefx
I have certainly brought this development way beyond what is recommendable here. In any case, and even bearing in mind that a much more restricted version would have to be implemented, my position on this front is now slightly different. I have confirmed that creating this kind of approach while trying to keep everything as simple as possible is quite difficult; this format allows a wide variety of inputs, which systematically triggers questions like: why stop here? Why not address that scenario too? On the other hand, having in place a perfectly-working (still a first version, but pretty powerful already; actually, I cannot think of too many extensions, other than fixing bugs) sample of a much more complex version is undoubtedly helpful to ease the development of the required much simpler approach.
Basically, this question is similar to "how to proceed regarding the inclusion of mathematical functions and constants? Most users will just use the basic functionalities". That's why the answer should be similar to what the .NET Framework did on this front: basic functionalities with lots of restrictions (perhaps more restrictions than required, as highlighted in one of my other CoreFX issues). That is, not implementing a comprehensive approach (power users rely on libraries for these purposes), but a set of basic features which might become handy at some point. For example, a new `Units` class whose constructors only allow `enum`s, which only supports the most commonly used units and types (e.g., m/s, km/h, mph, etc.; user-defined alternatives might also be included), which allows conversions and basic operations and, most importantly, which enables the value+unit eventuality. It would also include `Units.Parse` and `Units.TryParse` supporting a relatively big number of alternatives (growing in complexity on this front is quite cheap, mainly when a properly-working version, like UnitParser, is in place). Including such a basic feature (by being very careful to avoid the aforementioned almost-forced complexity increases) might be worthwhile; its impact on the framework would certainly be very small.
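A rough sketch of such a restricted surface, limited to a single quantity kind and with every name hypothetical (enum-only constructors plus Parse/TryParse on top):
``` C#
using System.Collections.Generic;

public enum LengthUnit { Metre, Kilometre, Foot, Mile }

public class Length
{
    static readonly Dictionary<LengthUnit, decimal> MetresPerUnit = new Dictionary<LengthUnit, decimal>
    {
        { LengthUnit.Metre, 1m }, { LengthUnit.Kilometre, 1000m },
        { LengthUnit.Foot, 0.3048m }, { LengthUnit.Mile, 1609.344m }
    };

    static readonly Dictionary<string, LengthUnit> Symbols = new Dictionary<string, LengthUnit>
    {
        { "m", LengthUnit.Metre }, { "km", LengthUnit.Kilometre },
        { "ft", LengthUnit.Foot }, { "mi", LengthUnit.Mile }
    };

    public decimal Value { get; private set; }
    public LengthUnit Unit { get; private set; }

    public Length(decimal value, LengthUnit unit) { Value = value; Unit = unit; }

    public Length ConvertTo(LengthUnit target)
    {
        //Go through metres as the common base.
        return new Length(Value * MetresPerUnit[Unit] / MetresPerUnit[target], target);
    }

    public static Length operator +(Length a, Length b)
    {
        //Same-type addition only; the right operand is converted to the left operand's unit.
        return new Length(a.Value + b.ConvertTo(a.Unit).Value, a.Unit);
    }

    public static bool TryParse(string s, out Length result)
    {
        //Deliberately tiny: "<number> <symbol>", symbols limited to m/km/ft/mi.
        result = null;
        string[] parts = (s ?? string.Empty).Trim().Split(' ');
        decimal value;
        LengthUnit unit;
        if (parts.Length != 2 || !decimal.TryParse(parts[0], out value) || !Symbols.TryGetValue(parts[1], out unit)) return false;
        result = new Length(value, unit);
        return true;
    }
}
```
With that shape, adding two lengths compiles, while adding a length and a mass cannot even be expressed.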
In summary, I do think that a (highly) restricted version might be a good thing. Shall I go ahead and create a small CoreFX-acceptable sub-version of UnitParser to get a better idea? I would have it ready in around one week after the go-ahead.
Going to close this for now. I agree with @mellinoe's previous statement and I don't see this having gained much traction since then.
There are a lot of open-ended questions still, and I would ideally like to see a prototype of this library and its usage out in the wild before considering first-class support in the .NET Libraries.
@tannergooding,
https://www.codeproject.com/Articles/578116/Complete-Managed-Media-Aggregation-Part-III-Quantu
I think it can also serve as a `Unit` type once you have the `IdealUnit` and the `IndirectionUnit`.
I know it works for the mass-energy equivalence, at least as well as I was equipped to test.
I don't think "up for grabs" makes sense for this, at least from how I think about the label. "Up for grabs" usually means there is a concrete piece of work to be done, but nobody has committed to doing it yet. In this case, it seems like there isn't even a design locked down, and even if there was, someone would already be lined up to implement it.
As for the discussion above: I have skimmed it, but don't have incredibly strong opinions about this topic. It seems like the whole discussion of string parsing should be entirely orthogonal to how the interface is expressed in terms of the type system. I would expect that any comprehensive measurement manipulation library would include some way to deal with string representations.
The library that @svick linked above, https://github.com/JohanLarsson/Gu.Units, looks pretty intriguing, and is something that I would probably enjoy using if I had need of a library like this. Several things in the library look nice.
Just from skimming that library, it seems to have a lot of desirable qualities that the proposal in the original topic doesn't. @Toxantron 's proposal seems sort of similar to Gu-Units, but a bit different as well. In my opinion, though, Gu-Units' approach seems to be the best, perhaps because it is the most fleshed out, and has an actual implementation.
That said, I'm skeptical that we will be able to find a "one-size-fits-all" API shape that will be suitable for the BCL for something like this. I feel that this may be a bit of an opinionated area to step into for the BCL (the discussion above is a bit of that), so I'm not sure we'll be able to find something that works well for everyone. Will we be able to add a lot of value by providing a library like this when things like Gu.Units are available for use already?