Rust: Tracking Issue for RFC 213: Default Type Parameter Fallback

Created on 28 Jul 2015 · 65 comments · Source: rust-lang/rust

EDIT: this issue has been stalled on disagreements about how to handle a nasty problem found during implementation. See the internals thread where this was detailed and discussed.


This is a tracking issue for RFC 213.

The initial implementation of this feature has landed.

cc @nikomatsakis

B-RFC-approved B-RFC-implemented B-unstable C-tracking-issue T-lang

All 65 comments

What is the status of this?

@bluss it is still feature-gated, AFAICT. See e.g. the discussion on PR #26870, or just look at this playpen.

I am not sure what the planned schedule is for unfeature-gating it.

nominating for discussion.

This doesn't seem to be working properly.

#![crate_type = "lib"]
#![feature(default_type_parameter_fallback)]

trait A<T = Self> {
    fn a(t: &T) -> Self;
}

trait B<T = Self> {
    fn b(&self) -> T;
}

impl<U, T = U> B<T> for U
    where T: A<U>
{
    fn b(&self) -> T {
        T::a(self)
    }
}

struct X(u8);

impl A for X {
    fn a(x: &X) -> X {
        X(x.0)
    }
}

fn f(x: &X) {
    x.b(); // ok
}

fn g(x: &X) {
    let x = x.b();
    x.0; // error: the type of this value must be known in this context
}

@mahkoh there is a necessary patch that hasn't been rebased since I finished my summer internship. I've unfortunately been busy with real-life stuff; it looks like @nikomatsakis has plans for landing a slightly different version, according to a recent post of his on the corresponding documentation issue for this feature.

@nikomatsakis I know the lang team didn't see any future in this feature, will you put that on record in the issue :smile:?

One example where this feature seems to be the only way out is the following concrete example of API evolution in libstd.

Option<T> implements PartialEq today, but we would like to extend it to PartialEq<Option<U>> where T: PartialEq<U>. It appears this feature can solve the type inference regressions that would otherwise occur (and might block us from doing this oft-requested improvement of Option).
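
For concreteness, here is a sketch of the desired impl on a stand-in MyOption type -- coherence rules prevent writing it for the real Option outside libstd, and the real change would replace libstd's existing impl:

enum MyOption<T> {
    None,
    Some(T),
}

// The generalized impl: comparison across two different payload types.
impl<T, U> PartialEq<MyOption<U>> for MyOption<T>
where
    T: PartialEq<U>,
{
    fn eq(&self, other: &MyOption<U>) -> bool {
        match (self, other) {
            (MyOption::Some(a), MyOption::Some(b)) => a == b,
            (MyOption::None, MyOption::None) => true,
            _ => false,
        }
    }
}

The inference hazard: with the single-parameter impl, comparing two Options forces both payload types to be equal, which is what lets integer literals infer today; the two-parameter impl no longer forces that, and a fallback along the lines of U = T is what could recover the old inference behavior.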

@bluss I HAVE been dubious of this feature, but I've been slowly reconsidering. @aturon is supposed to be doing some exploration of this whole space and writing up some detailed thoughts. I actually started rebasing @jroesch's dead branch to implement the desired semantics and made some progress there too, but I've been distracted.

One advantage of finishing up the impl is that it would let us experiment with extensions like the one you describe to see how backwards compatible they truly are -- one problem with fallback is that it is not ACTUALLY backwards compatible, because of the possibility of competing incompatible fallbacks.

That said I still have my doubts :)

Another example where this could be useful -- basically the same example as petgraph -- is adding allocators to collections in some smooth way.

What are the drawbacks to turning this on? It seems to mainly make things compile that otherwise cannot infer enough type information.

I have a pretty good use for this too. It's basically what @bluss mentioned, adding new types to an impl while avoiding breaking inference on existing usage.

Is the only issue with this the interaction with numeric fallback? I like default type parameters a lot. I often use them when I parameterize a type which has only one production instantiation, for mocking and to enforce boundaries. It's inconsistent, and for me unpleasant, that defaults don't work for the type parameters of functions.

I have a use case for this feature as well. Consider the following code:

#![feature(default_type_parameter_fallback)]
use std::path::Path;

fn func<P: AsRef<Path> = String>(p: Option<P>) {
    match p {
        None => { println!("None"); }
        Some(path) => { println!("{:?}", path.as_ref()); }
    }
}

fn main() {
    func(None);
}

Without default_type_parameter_fallback, the call in main would require a type annotation: func(None::<String>);.

Along similar lines, consider a function accepting an IntoIterator<Item=P>, where P: AsRef<Path>. If you want to pass iter::empty(), you have to give it an explicit type. With default_type_parameter_fallback, you can just pass iter::empty().
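
A sketch of the iter::empty() case, with a hypothetical print_all standing in for such a function:

use std::iter;
use std::path::Path;

fn print_all<I, P>(paths: I)
where
    I: IntoIterator<Item = P>,
    P: AsRef<Path>,
{
    for p in paths {
        println!("{:?}", p.as_ref());
    }
}

fn main() {
    // Today the item type must be spelled out:
    print_all(iter::empty::<String>());
    // With a P = String fallback on print_all, a bare
    // print_all(iter::empty()) could compile as well.
}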

Note: In case anyone hit the "type macros are experimental" error in 1.8.0, the type_macros feature is tracked at #27245 (the wrong issue number in the error message was corrected for 1.9.0 by #32516).

I've found this feature immensely helpful and would love to see it stabilized.

I think this feature is very useful for an ergonomic Read/Write with an associated error type. See https://github.com/QuiltOS/core-io/commit/4296d87ffaa3c2fe5e84bc5e01c0838fc596129e for how this was formerly done.

If we had this feature, possibly we could have slice::sort take a type parameter to provide the sorting algorithm.

So @eddyb floated an interesting idea for how to have this feature without the forwards compatibility hazards. I'm honestly not sure how much has been written about this and where -- but in general there is an obvious problem when you introduce "fallback": it can happen that a given type variable has multiple fallbacks which apply. This means that introducing a new type parameter with a fallback can easily still be a breaking change, no matter what else we do. This is (one of) the reasons that my enthusiasm for this feature has dulled a bit.

You can see this problem immediately when you consider the interaction with the i32 fallback we have for integers. Imagine you have foo(22) where the function foo is defined as fn foo<T>(t: T). Currently T will be i32. But if you change foo to fn foo<T=u32>(t: T), then what should T be? There are now two potentially applicable defaults: i32 and u32.
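
A minimal sketch of that hazard:

fn foo<T>(t: T) -> T { t }

fn main() {
    // Today integer literal fallback applies and T = i32.
    let x = foo(22);
    let _ = x;
    // If the signature later becomes fn foo<T = u32>(t: T) -> T, this same
    // call site has two applicable fallbacks for T: i32 and u32.
}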

The idea that @eddyb had was basically to be _more_ conservative around defaults. In particular, the idea was that we would limit defaults to type declarations (iirc) and not to fns. I'm having trouble recalling the precise plan he put forward -- it was quickly and over IRC -- but iirc the idea was that it would be an error to have a type variable that had no default mix with one that had some other default. All the type variables would have to have the same default.

So e.g. you could add an allocator parameter A to various data-structures like Vec and HashMap:

struct Vec<T, A=GlobalAllocator> { ... }
struct HashMap<K, V, A=GlobalAllocator> { ... }

so long as you are consistent about using the same default for allocators in every other place that you add them, since otherwise you risk having defaults that disagree. Don't have time to do a detailed write-up, and I'm probably getting something a bit wrong. Perhaps @eddyb can explain it better.

The gist of the idea is that _before_ applying defaults, we ensure that everything which _could_ have defaults added in the future was _already_ inferred (i.e. as we error now for unbound inference variables).

So if you had struct Foo<T, A>(Box<T, A>); (and Box had a default for A), let _ = Foo(box x); would _always_ be an error, as the result of inference would change if a default for A was added to Foo.

Your options would be struct Foo<T>(Box<T>); or struct Foo<T, A=GlobalAllocator>(Box<T, A>);: any other default would be useless during inference because it would conflict with Box's default.
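
A sketch of those options using a stand-in MyBox, since Box itself had no allocator parameter:

use std::marker::PhantomData;

struct GlobalAllocator;
struct MyBox<T, A = GlobalAllocator>(T, PhantomData<A>);

// Hazardous under the proposed rule: A has no default here, so any use that
// leaves A unconstrained errors, because adding a default to Foo later could
// change what such uses infer.
struct Foo<T, A>(MyBox<T, A>);

// Future-proof alternatives:
struct Foo2<T>(MyBox<T>);                         // no allocator parameter
struct Foo3<T, A = GlobalAllocator>(MyBox<T, A>); // default agrees with MyBox's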

This scheme works with both allocators and hashers AFAICT, since you can just use the "global default" everywhere you want to make it configurable, and there are likely more use cases out there like that.

The catch is that you have to limit from the start the possible locations of defaults you'll take into consideration to "complete" type inference, and allowing them on more than type definitions would require a lot of duplication of the same default everywhere, but there may be a reason to do that.

@nikomatsakis We could "inherit" defaults from Self's type parameters in inherent impls and allow defaults everywhere else, what do you think?
It seems rare that you have fully parametric free fns working with a container type that wants defaults.

We could also "inherit" defaults everywhere from user type definitions, and forbid having your own defaults for type parameters that end up being used in types which could have defaults in the future.

Such an adjustment would make this viable even in @withoutboats' <[T]>::sort situation.

You can see this problem immediately when you consider the interaction with the i32 fallback we have for integers. Imagine you have foo(22) where the function foo is defined as fn foo<T>(t: T). Currently T will be i32. But if you change foo to fn foo<T=u32>(t: T), then what should T be? There are now two potentially applicable defaults: i32 and u32.

I don't know, it seems to me like adding a default to an existing type parameter just ought to be a breaking change (and it also seems to me that T should be u32)?

Ideally, it should be possible to go from a specific type to a type parameter with a default without a breaking change. That would allow generalizing a function.

@withoutboats The problem is that then the effects on existing code can be silent and unpredictable, and that is the reason progress on this issue has been stalled.
And if adding defaults is a breaking change, we can't use them for API evolution.

@eddyb @joshtriplett I don't see how going from a type to a type parameter with a default to that type could be a breaking change. I'm saying that going from a type parameter without a default to a parameter with a default should just be defined as a breaking change. That is:

// Not breaking, as far as I can tell (unless we define integer fallback to have higher precedence)

// From:
fn foo(x: u32) { }
// To:
fn foo<T=u32>(x: T) { }

// Breaking (but that's okay IMO)

// From:
fn foo<T>(x: T) { }
// To:
fn foo<T=u32>(x: T) { }

@withoutboats I can imagine cases where going from a concrete type to a type parameter could cause exactly the same breakage.

Consider the following code:

fn foo(x: SomeOtherType) { ... }
fn bar<T=SomeType>() -> T { ... }

fn main() {
    let x = bar();
    foo(x);
}

That code should compile, inferring x to have type SomeOtherType, and thus inferring bar's T as SomeOtherType. However, if you change foo to:

fn foo<T=SomeOtherType>(x: T) { ... }

then that code will fail to compile, because it can't unambiguously infer the type of x.

(In real-world instances of this, bar will likely also require that T implement some trait, so bar can do something useful with it; that doesn't change this example as long as SomeOtherType also implements the same trait.)

A common real-world example of this would involve various combinations of OsString, Path, and similar string types, and attempting to generalize a function that takes one of those types. For instance, I'd like to be able to generalize a function that takes a &Path to a function that takes an AsRef<Path> and defaults to &Path. But that _could_ generate breakage and type inference failures.
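
A minimal sketch of that generalization (names hypothetical):

use std::path::Path;

// Before: concrete parameter type.
fn show_v1(p: &Path) {
    println!("{:?}", p);
}

// After: generalized. Callers passing a &Path still work, but any caller
// whose argument type was previously inferred from this parameter (like the
// bar() example above) can now fail to infer -- which is where a fallback of
// the P = &Path flavor would come in.
fn show_v2<P: AsRef<Path>>(p: P) {
    println!("{:?}", p.as_ref());
}

fn main() {
    show_v1(Path::new("/tmp"));
    show_v2("/tmp"); // a &str works too after generalizing
}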

@withoutboats The concrete type could have unified with an unlimited number of type variables that now remain unbound. If any of them have defaults, they can now conflict, and no universal priority ordering can really exist. If they don't, then they're not future-proof.

Enabling defaulting for everyone has the same effect as adding a type parameter with a default (e.g. for HashMap's hasher), although I suppose that still doesn't require that adding a default to an existing type parameter is not a breaking change.

@eddyb @joshtriplett That makes sense, thanks.

Adding a defaulted parameter without changing the type of any input or output variable can't be a breaking change though, no? Thinking of slice::sort here.

@withoutboats Yes, that is a very fortunate case of a type parameter that can only be explicitly provided or inferred as its default, not bound to anything else.

Another use case: I'd really like to move HashMap/HashSet to libcollections from libstd. The problem is RandomState, and in particular HashMap<K,V,RandomState>::new. RandomState needs to live in libstd, but you can't split the impl like that between libcollections and libstd. So one could try to have a generic implementation in libcollections and a type alias in libstd like type HashMap<K,V,S=RandomState> = core_collections::HashMap<K,V,S>, combined with an impl of new for HashMap<K,V,S> where S: BuildHasher + Default in libcollections. This, however, results in the exact inference problem that this feature is designed to solve.
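
A sketch of that split, with core_collections modeled as a module so the example is self-contained:

mod core_collections {
    use std::hash::BuildHasher;
    use std::marker::PhantomData;

    pub struct HashMap<K, V, S> {
        _marker: PhantomData<(K, V, S)>,
    }

    impl<K, V, S: BuildHasher + Default> HashMap<K, V, S> {
        pub fn new() -> Self {
            HashMap { _marker: PhantomData }
        }
    }
}

use std::collections::hash_map::RandomState;

// What libstd would export:
type HashMap<K, V, S = RandomState> = core_collections::HashMap<K, V, S>;

fn main() {
    // The inference problem: nothing constrains S, and the alias's default
    // does not drive inference for the call.
    // let map = HashMap::new(); // error: type annotations needed
    let _map: HashMap<u32, u32> = HashMap::new(); // annotation required today
}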

@eddyb to clarify some points about your suggestion:

  • it must be enforced at usage sites, not in definitions? In your example, it is let _ = Foo(box x) that is the problem, specifically if the type of x cannot be fully inferred?
  • do you intend to forbid defaults for functions entirely? Could you give an example of functions with defaults which are more problematic than types?
  • could you explain why using Self's types makes the sort example work?
  • adding a default to an existing type parameter could still be a breaking change?
  • it must be enforced at usage sites, not in definitions? In your example, it is let _ = Foo(box x) that is the problem, specifically if the type of x cannot be fully inferred?

Yes, however, the enforcement on uses is more permissive than today's type-checking, i.e. it would only allow more code to compile. And it's not the type of x, but the fact that box x could have any A, and the lack of a default on Foo represents a hazard.

  • do you intend to forbid defaults for functions entirely? Could you give an example of functions with defaults which are more problematic than types?

It's not functions _with_ defaults, it's functions with _any_ type parameters that would cause problems in such a scheme. Without their own defaults, matching everything else (or automatically deduced), they would prevent _any_ inference variable they come into contact with from getting its defaults from _anywhere_.

  • could you explain why using Self's types makes the sort example work?

Not just Self, but _everything_. The sort example would be helped by taking into account defaults of type parameters of types used in the signature _and_ the fact that the type parameter would _not_ be used in the signature, e.g.:

impl<T> [T] {
    pub fn sort<A: Algorithm = MergeSort>(&mut self) {...}
}

In such a definition, A can either be explicitly provided _or_ defaulted to MergeSort.
It's _very_ important that A _can't_ be inferred from _anywhere_ else, which results in 0 hazards.

  • adding a default to an existing type parameter could still be a breaking change?

I don't see how. All the code that had compiled without the default succeeded in inference _without_ applying _any_ defaults, so it couldn't _ever_ see the new default.

Thanks for the explanations!

It's not functions with defaults, it's functions with any type parameters that would cause problems in such a scheme. Without their own defaults, matching everything else (or automatically deduced), they would prevent any inference variable they come into contact with from getting its defaults from anywhere.

So, we apply the same rules to type parameters on both functions and types?

And to summarise your rule, is it accurate to say "wherever we perform inference, if an inference variable has a default, then it is an error if that variable unifies with any other inference variables, unless they have the same default or the variable also unifies with a concrete type"?

@nrc Well, we _could_ apply the same rules, we could deny/ignore defaults on functions, or we could make functions go somewhere in between, using defaults that we can gather from their signature, combined with their own defaults, in cases where they would be unambiguous.
I tried to list the options, and I prefer the hybrid rule for functions, but it's just one option of several.

As for your formulation, if you apply that rule after inference stops at a fix-point, it would amount to "for all remaining unassigned inference variables, first error for any without a default" (because they'd never unify with anything that's not caused by applying a default), "and then apply all defaults at the same time" (with conflicting defaults causing type mismatch errors).
So yes, I believe that is correct, it seems to be equivalent to the algorithm I had in mind.

_However_, if you try to apply defaults _before_ inference has given up, or your "unifies with a concrete type" can use types that were a side-effect of applying defaults, the compatibility hazards remain.

I have this convenient command executor which deals with all the things that might go wrong, including checking the exit code.
Obviously in 99% of cases a non-zero exit code means an error, but I'd like to cover the remaining 1% without always requiring some exit-code validator.

I'm reading that default_type_parameter_fallback is not likely to be stabilized, is there some other way to model those 99/1 use cases without resorting to something like fn execute_command_with_custom_exit_code_validator(...)?

pub trait ExitCodeValidator {
    fn validate(exit_code: i32) -> bool;
}

pub struct DefaultExitCodeValidator;

impl ExitCodeValidator for DefaultExitCodeValidator {
    fn validate(exit_code: i32) -> bool {
        return exit_code == 0;
    }
}

pub fn execute_command<T, V=DefaultExitCodeValidator>(cmd: &mut Command) -> Result<T>
where V: ExitCodeValidator,
{
    let output = ...;
    let exit_code = ...;

    if V::validate(exit_code) {
        Ok(output)
    } else {
        Err(InvalidExitCode(exit_code))
    }
}

Correct me if I'm wrong, but the behavior being discussed here seems to be available on both stable and nightly (without a warning). Even though defaulted type parameters for functions are feature-gated, it's possible to emulate this behavior by implementing the function on a struct with a default type parameter.

This behavior can be seen here. Am I mistaken in thinking that this is exactly the behavior this conversation is intending to prevent? I've changed i8 from a concrete type to a default, and it resulted in a change of the type of the resulting value.

Edit: I was in fact mistaken! I managed to confuse myself -- what's actually happening here is that _none_ of the default type parameters are being applied. The only reason the line with Withi8Default::new compiles is that integers default to i32. In this case, both Withi8Default::new and Withi64Default::new are just the identity function. The default type parameters on the struct only apply when using struct literal syntax, not when calling associated functions (as they should).

I see. I guess this does not clutter the API that much and still monomorphises the call, so you don't have to pay for indirection as with the builder-pattern alternative. Cool, thanks for pointing this out.

@jsen- I think you misunderstood my point. I had thought I had discovered a bug -- default type parameters _shouldn't_ be usable on functions (currently). However, I was just mistaking numeric defaults for something more sinister.

@jsen- it doesn't directly help you, but it occurs to me that if we had default fn parameters, one could (maybe?) write:

pub trait ExitCodeValidator {
    fn validate(&self, exit_code: i32) -> bool;
}

pub struct DefaultExitCodeValidator;

impl ExitCodeValidator for DefaultExitCodeValidator {
    fn validate(&self, exit_code: i32) -> bool {
        return exit_code == 0;
    }
}

pub fn execute_command<T, V>(
    cmd: &mut Command,
    validator: V = DefaultExitCodeValidator)
    -> Result<T>
where V: ExitCodeValidator,
{
    let output = ...;
    let exit_code = ...;

    if validator.validate(exit_code) {
        Ok(output)
    } else {
        Err(InvalidExitCode(exit_code))
    }
}

This seems better than your existing code because it allows an exit code validator to carry some state. But of course it relies on a future that's not implemented, and in particular on the ability to instantiate a default parameter of generic type.

@nikomatsakis right, I just shortened my current implementation, which relies on default_type_parameter_fallback.
I'd sure love to see default function arguments, but I'm not even aware of any plans to include them in the language (I'll sure look around in the RFC section :smile:)
I posted the question because we need to move to stable some day :wink:
Edit: looking again at your example, this time with both eyes open (it's after midnight here already :laughing:), you didn't mean default function arguments. But my goal is to not specify the validator at all in the usual case. I'm probably overthinking it, though, because I agree that explicit is better than implicit.

@cramertj yeah, I was confused. Based on your comment I got the idea that I could emulate it with _struct_ defaulted type params.
Something like this:

// defaulted:
Exec::execute_command(cmd);
// explicit:
Exec::<MyExitCodeValidator>::execute_command(cmd);

...but I'm finding this is not possible either. I haven't yet fully shifted away from C++'s metaprogramming. Definitely still need to learn a lot about Rust.

@jsen- Try <Exec>::execute_command(cmd) (that way the path is in a type context, not expression context, and the defaults are forcefully applied).
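
A self-contained sketch of why this works (types hypothetical):

use std::marker::PhantomData;

pub struct DefaultValidator;

pub struct Exec<V = DefaultValidator>(PhantomData<V>);

impl<V> Exec<V> {
    pub fn execute_command(cmd: &str) {
        println!("running {}", cmd);
    }
}

fn main() {
    // <Exec> puts the path in type context, so V defaults to DefaultValidator:
    <Exec>::execute_command("ls");
    // The explicit form still works:
    Exec::<DefaultValidator>::execute_command("ls");
    // A bare Exec::execute_command("ls") would leave V to inference and fail
    // with "type annotations needed".
}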

@eddyb thanks a lot, that worked perfectly
Here's a link to a somewhat contrived but working example, if that helps anyone who finds this.

adding a link to this (old) internals thread for posterity:

https://internals.rust-lang.org/t/interaction-of-user-defined-and-integral-fallbacks-with-inference/2496

it discusses the interaction of RFC 213 with integral fallback, though similar issues can also arise without using integers, if you just have multiple functions with competing fallbacks.

It's really unfortunate that this is blocking #32838. Is there anything I can do to help push this forward, outside of implementation work?

I don't think it's blocking that. We can always just newtype all of collections in std, and move hashmap into collections while we are at it.

It's an ugly solution, but only a temporary one. The allocator traits are far enough behind schedule that we should seriously consider it.

@Gankro I think we're still sort of in need of a good survey of the space of ideas and a nice summary. That would be helpful.

I have a few more cases where this is really useful. In particular, I have a generic function whose return value requires type inference. However, if nothing uses the return value, it requires annotations, even though I have perfectly sensible defaults that would then get optimized away.

Think of it like this:

// currently required
let _: () = my_generic_function()?;

// with a unit default
my_generic_function()?;
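
A sketch of the kind of function in question (hypothetical, with io::Error standing in for the real error type):

use std::io;

// The caller picks T purely by inference; () would be the natural default.
fn my_generic_function<T: Default>() -> Result<T, io::Error> {
    Ok(T::default())
}

fn main() -> Result<(), io::Error> {
    // Today: nothing constrains T, so the annotation is required.
    let _: () = my_generic_function()?;
    // With fn my_generic_function<T: Default = ()>() -> ..., a bare
    // my_generic_function()?; would compile.
    Ok(())
}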

I'm no expert but I'll try to summarize and contribute.

We want to use type defaults to inform inference. However, we may get multiple defaults that could apply to an inference variable, resulting in a conflict. Also, new defaults may be added in the future, causing conflicts where there were none.

@eddyb proposes the conservative approach of erroring on conflict and erroring when trying to apply a default to a type with no default, for future-proofing. The consequence is that all defaults must be present and must agree in order to be applied.

Let's take the reasonable and useful example by @joshtriplett. It would not work because it's trying to unify T in struct Option<T> with P in fn func<P: AsRef<Path> = String>(p: Option<P>). Even if Option<T> were to gain a default for T, intuitively func should not care, because its default is _more local_ and should take precedence.

So I propose that _local defaults trump type defaults_. So fn and impl defaults would take precedence over the default on the type, meaning that any future conflict between T and P above would always be solved by using the default on P, making it future-proof to apply P's default which is String.

Note that this is limited to literals; if the value came from another fn, as in:

fn noner<T>() -> Option<T> { None }

fn main() {
    // func is same as in original example.
    func(noner());
}

Then we are back to an error.

This is a small yet useful extension; together with the slice::sort example it makes a case for allowing type defaults in impls and fns. Hopefully we are coming to a _minimal but useful_ version of this feature that we could stabilize.

Edit: If we give fns and impls preference on the defaults, then we should not inherit the default from the type, as that would defeat the point of making the addition of a default non-breaking.

Edit 2:

no universal priority ordering can really exist

Given a set of type variables in context that could be unified by a default, can't we order those variables by the order they are introduced? And then consider them in order, trying their defaults and failing at the first variable with no default.

Found this issue when I was looking into RFC.

I noticed a problem recently: I have a default type parameter, but my impl is generic.
So currently I have two options:

  • impl for the struct with its default type parameter
  • Ask users to use the slightly ugly <Struct> syntax to guarantee the default type parameter.

I wonder if it would be sensible to allow impl blocks to have default type parameters while retaining the ability to specify a trait bound.

e.g.

pub trait MyTrait {}

struct MyStruct;
impl MyTrait for MyStruct {}

struct API<M: MyTrait> {
    // ....
}

impl<M: MyTrait = MyStruct> API<M> {
    pub fn new() -> Self {
        // ...
    }
}

I can understand the reasoning for impl blocks to overshadow the default parameter when invoking API::new(), but it is a bit sad that you cannot cleanly set a default type parameter for an impl block alongside your struct.

Could this be fast-tracked? Having better type inference would allow a better user experience for collections with custom allocators. https://github.com/rust-lang/rust/issues/42774#issuecomment-464367839

The proposal is currently in a zombie state due to disagreements among the lang team over how to resolve issues that were discovered during implementation. No progress has been made, as it is otherwise regarded as low priority.

Maybe that priority should be re-evaluated? I think this is blocking various library improvements such as std-compatible HashMap in alloc, custom allocators, etc.

I'm not aware of any reason custom allocators would be blocked on default-affects-inference. They can be added backwards-compatibly in the same way hashers currently work -- ::new and friends would hardcode the default, and Default::default/with_allocator would be properly generic.
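
A sketch of that pattern, mirroring how std's HashMap handles its hasher (simplified, hypothetical Map type):

use std::collections::hash_map::RandomState;
use std::marker::PhantomData;

pub struct Map<K, V, S = RandomState> {
    _marker: PhantomData<(K, V, S)>,
}

impl<K, V> Map<K, V, RandomState> {
    // new hardcodes the default hasher type, so callers never leave S
    // unconstrained and inference works without any fallback.
    pub fn new() -> Self {
        Map { _marker: PhantomData }
    }
}

impl<K, V, S> Map<K, V, S> {
    // The fully generic constructor takes the hash builder explicitly; an
    // allocator parameter could be added the same way.
    pub fn with_hasher(_hash_builder: S) -> Self {
        Map { _marker: PhantomData }
    }
}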

Is this the same issue I'm running into here?

pub struct Foo<T: Default = u64>(T);

impl<T: Default> Foo<T> {
    pub fn make() -> Foo<T> {
        Foo(T::default())
    }
}

fn main() {
    // error[E0283]: type annotations required: cannot resolve `_: std::default::Default`
    let foo = Foo::make();
}

Yes.

@Gankro

I'm not aware of any reason custom allocators would be blocked on default-affects-inference. They can be added backwards-compatibly in the same way hashers currently work -- ::new and friends would hardcode the default, and Default::default/with_allocator would be properly generic.

The default hasher RandomState for HashMap cannot go in alloc because it accesses OS-specific sources of randomness. (https://doc.rust-lang.org/src/std/collections/hash/map.rs.html#3165-3186) We therefore need to define the default "after the fact". https://github.com/rust-lang/rfcs/pull/2492 is the only way I know to get around this.

I'd also like to define collections prior to the notion of global allocation being introduced, which would piggyback on the solution for RandomState. This, of course, is not a blocker.

So, I don't really chip in (I'm not good with language design), but for the RandomState issue could we just use the RDRAND/RDSEED CPU instructions when no OS is available to supply a source of randomness? You could even have a panic (or possibly a compile-time check) to determine whether those instructions are supported and, if not, either fail with an error or panic.

What is the current status of this feature? What issues would need to be resolved to make this possible to stabilize?

The internals thread linked from the top post of this issue, regarding interaction with integer literal inference, seems to have come to a conclusion. What other blockers exist?

My take is that this feature is sort of back to "square one". I don't remember how much of the implementation exists, but I at least still have some strong concerns about "open ended" type parameter fallback. I'm not opposed to somebody picking up and trying to explore the issue again, but I would want to kind of "start over" by reviewing the use cases -- as concrete as possible! -- and the 'palette' of solutions available.

I think we did rip out some of the code related to this at some point, as well.

I think we should "unaccept" the RFC to make it clear this needs to be redesigned from the ground up, go through RFCs again.

I would be in favor of that, and removing whatever code remains. Sort of like @Mark-Simulacrum did for the x <- y operator.

I think that was actually @aidanhs :)

I would personally also be in favor of closing tracking issues in favor of new RFCs where there is significant design work remaining and the work is not actively in cache for some group of people.

Yes, you're right, sorry @aidanhs =)

I'm going to close this, as there seems to be agreement above to close and there hasn't been any activity since.

For anyone following this who may wish to start a new effort to solve it (which should probably begin with a lang MCP), one of the biggest use cases is adding a new type parameter to an existing type, without breaking semver compatibility.

one of the biggest use cases is adding a new type parameter to an existing type, without breaking semver compatibility.

Presumably there's some overlap here with https://github.com/rust-lang/wg-allocators/issues/2, though I haven't been following this issue closely.
