Can't remap tuple keys to object keys in mapped types
🔎 Search Terms
mapped tuple, mapped types
🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about mapped types
⏯ Playground Link
https://www.typescriptlang.org/play?ts=5.2.2#code/CYUwxgNghgTiAEkoGdnwCLgPahgHgBUA+eAbwCh4r5QwcQAKWGKATwCEBXAM25BgBc8AIIwWHHnxgBKIQQDc5AL7lyAF1YAHBADUoETiAJaQAeW6F4IAB5qQAO2BpMdXHij3WREgF54BK1sHJwxsNwBLeyl4ACUSAH5Y+CF7EAA3fkV1E3gAZTUYTjA1TGQwGHDNNSwYeD8YkChgLHsIVlFxPDgmlrb4AG0AaxBWIWQCyIBzABp4DW0hF3p8Dy8AXSIs+YR8wuLjbQAxGrxS8srq2ps7RzRdopKQMoqqmt8ySmp+gGl4SPgGMNWFhuKFnhcatJ4AAyeD2TgAWwARvx4CgwedXjAfmt+gAGNZrIR6AxGEzmU5PTGXHH9ACMG0USiySFQeQKDyWbjo9nGGJel0CNxC92KZwFbyFwWcYX4eFFagOIGO+HFEJg3g+1EQLXGe0uDFA4KxiypEpkWu11GQnG0MAY0kU2pUKnItGgcB1vLUfwAzAAmLn8U2uOXw5H8TZu8AehA8vnhABsABYg4JQqH8EjwpNImoo+R4z69ZzZbU-KkAO7s-Vphj9T5UfoAIigzdm4QDabW00bA2bSPbfxT3d7a0dqiL8BLxTq045YrLADpaPQGFWRGI2FxePwGHS8XjpBPC7qfWkUoiUeX5-ql1BFEA
💻 Code
```ts
declare class Decoder<T> {
  decode(arrayBuffer: ArrayBuffer): T;
}

type ValueTypeOf<T extends Decoder<any>> = T extends Decoder<infer R> ? R : never;

type StructDescriptor = ReadonlyArray<readonly [key: string, type: Decoder<any>]>;

type StructTypeFor<Descriptor extends StructDescriptor> = {
  [K in (keyof Descriptor) & number as Descriptor[K][0]]: ValueTypeOf<Descriptor[K][1]>;
};

class StructDecoder<const Descriptor extends StructDescriptor> extends Decoder<StructTypeFor<Descriptor>> {
  constructor(descriptor: Descriptor) {
    super();
  }
}

declare const i32Decoder: Decoder<number>;
declare const i64Decoder: Decoder<bigint>;

const structDecoder = new StructDecoder([
  ["a", i32Decoder],
  ["b", i64Decoder],
]);

const struct = structDecoder.decode(new ArrayBuffer(100));

// I expected this would work, but it does not
const v: number = struct.a;
```
🙁 Actual behavior
struct.a has type number | bigint; however, it should only have type number.
🙂 Expected behavior
struct.a should have type number (and similarly struct.b should have type bigint).
Additional information about the issue
This does work when using a record instead; however, that isn't as useful, as I want the descriptor to remain an array so I can iterate over it.
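For contrast, here is a sketch of that record-based alternative (the RecordStructTypeFor name and shape are my own, not from the issue): no key remapping is needed, so each property keeps its precise type, but the descriptor loses its ordered, iterable tuple shape.

```typescript
// Minimal re-declarations so this sketch stands alone
declare class Decoder<T> {
  decode(arrayBuffer: ArrayBuffer): T;
}
type ValueTypeOf<T extends Decoder<any>> = T extends Decoder<infer R> ? R : never;

// Record-based descriptor: the property names carry the keys directly,
// so no `as` clause is needed and precision is preserved per key
type RecordStructTypeFor<D extends Record<string, Decoder<any>>> = {
  [K in keyof D]: ValueTypeOf<D[K]>;
};

type S = RecordStructTypeFor<{ a: Decoder<number>; b: Decoder<bigint> }>;
const s: S = { a: 1, b: BigInt(2) }; // a: number, b: bigint (precise per key)
console.log(s.a, s.b);
```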
May be related to https://github.com/microsoft/TypeScript/issues/27995.
Workaround: do not use `keyof SomeTuple & number` as a mapped index; instead, use a utility type to get the correct numeric index type of the tuple.
```ts
type KeyofTuple<T extends readonly any[]> =
  Exclude<keyof T, keyof []> extends infer StringIndex
    ? StringIndex extends `${infer NumericIndex extends number}`
      ? NumericIndex
      : never
    : never;
```

```diff
 type StructTypeFor<Descriptor extends StructDescriptor> = {
-  [K in (keyof Descriptor) & number as Descriptor[K][0]]: ValueTypeOf<Descriptor[K][1]>;
+  [K in KeyofTuple<Descriptor> as Descriptor[K][0]]: ValueTypeOf<Descriptor[K][1]>;
 };
```
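Under this workaround, the resulting key type really is a union of numeric literals rather than the wide type number, which can be sanity-checked with a couple of assignments (the Pair alias below is just an illustrative name, not from the issue):

```typescript
type KeyofTuple<T extends readonly any[]> =
  Exclude<keyof T, keyof []> extends infer StringIndex
    ? StringIndex extends `${infer NumericIndex extends number}`
      ? NumericIndex
      : never
    : never;

// Illustrative tuple descriptor
type Pair = readonly [readonly ["a", string], readonly ["b", number]];

// KeyofTuple<Pair> resolves to the literal union 0 | 1, so 2 is rejected
const k0: KeyofTuple<Pair> = 0;
const k1: KeyofTuple<Pair> = 1;
// const k2: KeyofTuple<Pair> = 2; // error: '2' is not assignable to '0 | 1'
console.log(k0, k1);
```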
I think that perhaps this could work (but it doesn't):
```ts
type StructTypeFor<Descriptor extends StructDescriptor> = {
  [K in keyof Descriptor as Descriptor[K][0]]: ValueTypeOf<Descriptor[K][1]>;
};
```
I might play with implementing a change that would allow this if I find the time for this next week.
You should use & `${number}` instead of & number. That's because 0, 1, etc are string keys containing a number, not numbers.
```ts
type StructTypeFor<Descriptor extends StructDescriptor> = {
  [K in keyof Descriptor & `${number}` as Descriptor[K][0]]: ValueTypeOf<Descriptor[K][1]>;
};
```
My own understanding of what is going on so far:
We have a mapped type called StructTypeFor<Descriptor>. The return type of structDecoder.decode(...), i.e. the type of struct, is that mapped type instantiated with Descriptor = readonly [readonly ["a", Decoder<number>], readonly ["b", Decoder<bigint>]], which is the type inferred from the argument passed to structDecoder.decode(...).
Depending on how you write the mapped type StructTypeFor<Descriptor>, you get different types for struct.
- If you write the mapped type like this:
```ts
type StructTypeFor<Descriptor extends StructDescriptor> = {
  [K in (keyof Descriptor) & number as Descriptor[K][0]]: ValueTypeOf<Descriptor[K][1]>;
};
```
Then, when we resolve the mapped type with Descriptor = readonly [readonly ["a", Decoder<number>], readonly ["b", Decoder<bigint>]], K will range over (keyof Descriptor) & number, but what does that intersection resolve to?
First, keyof Descriptor is going to be a union of:
- `"0"`, because `Descriptor` ends up being a tuple type of length 2
- `"1"`, also from the tuple type of length 2
- `"length"`, `"toString"`, `"map"`, `"filter"`, `"reduce"`, etc., i.e. all the array methods, because a tuple is an array after all, so those methods get inherited by tuple types
- `number`, from the `number` index signature[^1] present in arrays, again because a tuple is an array (* this part doesn't entirely make sense to me... why do we need a fixed-length tuple type to have a `number` index signature?)
This union, when intersected with number, results in number, and that's what K ends up ranging over.
This results in the properties of our mapped type being Descriptor[number][0] = "a" | "b", and the type of those properties being ValueTypeOf<Descriptor[K][1]> = ValueTypeOf<Descriptor[number][1]> = ValueTypeOf<Decoder<number> | Decoder<bigint>> = number | bigint (* this is an approximation that omits some details).
The result is that the type of struct resolves to { "a": number | bigint, "b": number | bigint }.
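The collapse described above can be reproduced in isolation (the Desc alias is just an illustrative stand-in for the inferred descriptor type): intersecting with number keeps only the index-signature key, so every property gets the union of all element types.

```typescript
// Illustrative stand-in for the inferred descriptor type
type Desc = readonly [readonly ["a", number], readonly ["b", bigint]];

// keyof Desc is "0" | "1" | "length" | "toString" | ... | number;
// intersecting with `number` collapses it to just `number`
type K = keyof Desc & number;

// Indexing with `number` unions all tuple elements, so precision is lost:
const key: Desc[K][0] = "a";       // type "a" | "b"
const val: Desc[K][1] = BigInt(2); // type number | bigint
console.log(key, val);
```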
- If you write the mapped type like this:
```ts
type StructTypeFor<Descriptor extends StructDescriptor> = {
  [K in (keyof Descriptor) & `${number}` as Descriptor[K][0]]: ValueTypeOf<Descriptor[K][1]>;
};
```
then roughly the same will happen, except K now will range over (keyof Descriptor) & `${number}` .
keyof Descriptor resolves to the same union described above, i.e. "0" | "1" | "length" | "toString" | ... | number.
When that is intersected with `${number}`, the result is "0" | "1", and that's what K ranges over.
So the properties of the resolved mapped type will be Descriptor["0"][0] and Descriptor["1"][0], which are respectively "a" and "b".
When we resolve the type of a property of the mapped type, say the type of the property for when K is "0", we resolve ValueTypeOf<Descriptor[K][1]> = ValueTypeOf<Descriptor["0"][1]> = ValueTypeOf<Decoder<number>> = number.
The result is that the type of struct resolves to { "a": number, "b": bigint }.
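Putting the `${number}` version back into the issue's own declarations shows the struct type coming out with per-key precision (this sketch re-declares the issue's types so it stands alone; the D and S aliases are mine):

```typescript
declare class Decoder<T> {
  decode(arrayBuffer: ArrayBuffer): T;
}
type ValueTypeOf<T extends Decoder<any>> = T extends Decoder<infer R> ? R : never;
type StructDescriptor = ReadonlyArray<readonly [key: string, type: Decoder<any>]>;

// The `${number}` intersection keeps only the element keys "0" and "1"
type StructTypeFor<Descriptor extends StructDescriptor> = {
  [K in keyof Descriptor & `${number}` as Descriptor[K][0]]: ValueTypeOf<Descriptor[K][1]>;
};

type D = readonly [readonly ["a", Decoder<number>], readonly ["b", Decoder<bigint>]];
type S = StructTypeFor<D>; // { a: number; b: bigint }

const s: S = { a: 1, b: BigInt(2) }; // a must be a number, b must be a bigint
console.log(s.a, s.b);
```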
- If you write the mapped type like this:
```ts
type StructTypeFor<Descriptor extends StructDescriptor> = {
  [K in keyof Descriptor as Descriptor[K][0]]: ValueTypeOf<Descriptor[K][1]>;
};
```
then you get errors on Descriptor[K][0], saying that "Type '0' cannot be used to index type 'Descriptor[K]'.", and that "Type 'Descriptor[K][0]' is not assignable to type 'string | number | symbol'."
Note that we don't get the same error on Descriptor[K][1] occurring in ValueTypeOf<Descriptor[K][1]>, because as of #48837, since K ranges over keyof Descriptor and Descriptor is an array or tuple type, K has an implicit constraint of number | `${number}` in ValueTypeOf<Descriptor[K][1]>.
So we can read that part of our mapped type declaration as ValueTypeOf<Descriptor[K & (number | `${number}`)][1]>.
Ignoring that error, what happens is similar to the first case, except that K will now range over all properties of Descriptor. When K is one of the array methods, Descriptor[K][0] will resolve to unknown, and therefore those array methods don't contribute to the properties of the resolved type, so we're left with K ranging over "0" | "1" | number to produce the properties of the resolved type.
When we are resolving the types of the properties of the resolved mapped type, we resolve ValueTypeOf<Descriptor[K][1]>, and that ends up resolving to number | bigint via a similar process to the first case listed above.
Some things I don't understand or that bother me here:

- A tuple type of a fixed, known length still has a `number` index signature, and that leads to us mapping over this index signature in a mapped type, and I find this surprising.
- Adding an intersection to the `K in keyof Descriptor` with `number` and `${number}` has different behavior, and the distinction seems easy to overlook. Intersecting with `${number}` works because we represent the tuple properties as `"0"`, `"1"`, etc., i.e. as numeric string literals. But couldn't we also represent those properties as `0`, `1`, etc.? It makes some sense to think so, because we do index arrays/tuples with numbers. I know that ultimately, at run time, the numbers are converted to strings to access the array... The other part of one way working while the other doesn't is that intersecting with `${number}` also gets rid of the `number` index signature, since `number` & `${number}` is `never`. But couldn't the index signature for arrays/tuples use `${number}` instead? I.e. couldn't it be [n: `${number}`]: SomeType?
- When you write the mapped type as `{ [K in keyof Descriptor as Descriptor[K][0]]: ValueTypeOf<Descriptor[K][1]> }`, you get an error on `Descriptor[K][0]` because we think `K` can range over any property of `Descriptor`, including the array methods. But you don't get a similar error on `ValueTypeOf<Descriptor[K][1]>`, because that occurrence of `K` is implicitly constrained to `number` | `${number}` since #48837. That distinction seems to me like an oversight, and I think there shouldn't be an error there. That distinction is fixed by part of #55774 (though that PR also implements other things).
[^1]: by "number index signature" I mean here an index signature that looks like [n: number]: SomeType.
> A tuple type of a fixed, known length still has a number index signature, and that leads to us mapping over this index signature in a mapped type, and I find this surprising.
I suspect that not having that would result in some questionable DX:
```ts
const tuple = ['', 10] as const;

function getX(i: number) {
  return tuple[i]; // would be an error
}
```
For improved type safety of this access, you can opt into noUncheckedIndexedAccess. With that option in mind, there is nothing really wrong with having that number index signature. That option is not the default, though.
> Adding an intersection to the K in keyof Descriptor with number and `${number}` has different behavior, and the distinction seems easy to overlook.
This ☝️ is why I decided to open my PR: with it, you don't even need to use an intersection, so I think it's an improvement, since one doesn't have to consider what the correct way to write this intersection is. They can just use built-in language features to achieve the desired outcome.
> But couldn't we also represent those properties as 0, 1, etc.?
That's (kinda) what I tried in https://github.com/microsoft/TypeScript/pull/48599 and that PR ultimately led to https://github.com/microsoft/TypeScript/pull/48837
> I know ultimately, at run time, the numbers are converted to strings to access the array...
Yes, from that point of view, TypeScript's representation is correct. I don't think it's super pragmatic, though :P But certainly, there are also other considerations here beyond just arrays; number/string indexers have at times weird behaviors and overlaps between each other.
> But couldn't the index signature for arrays/tuples use `${number}` instead? I.e. couldn't it be [n: `${number}`]: SomeType?
It's worth noting that arrays/tuples would then have to have both index signatures, because `${number}` would reject plain numbers.
I think for now we don't really want to change the rules regarding (homomorphic) mapped types instantiated with array or tuple types.
There's already a way to express what this issue asks for, as suggested above:
> You should use & `${number}` instead of & number. That's because 0, 1, etc. are string keys containing a number, not numbers.
>
> ```ts
> type StructTypeFor<Descriptor extends StructDescriptor> = {
>   [K in keyof Descriptor & `${number}` as Descriptor[K][0]]: ValueTypeOf<Descriptor[K][1]>;
> };
> ```
The current rule we have landed on is to instantiate mapped types with array and tuple types in a special way when the mapped type is homomorphic and has no as clause, i.e. mapped types that look like { [K in keyof T]: SomeType<T, K> }.
That means that, if you have such a mapped type instantiated with an array or tuple type:
```ts
type Mapped<T> = { [K in keyof T]: SomeType<T, K> };
type MappedTuple = Mapped<[1, 2, 3]>;
// [SomeType<[1, 2, 3], "0">, SomeType<[1, 2, 3], "1">, SomeType<[1, 2, 3], "2">]
```
then only the element properties of the type are considered when mapping (e.g. K will range over "0", "1", "2" in MappedTuple), so the "shape" (or the "array-ness", if you will) of the input type is preserved and the result produced is also an array or tuple type.
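The preserved "array-ness" can be confirmed concretely (Boxed and Triple are illustrative names, not from the thread): the result of the mapped type is still a tuple, down to its literal length type.

```typescript
// Homomorphic mapped type with no `as` clause: tuple shape is preserved
type Boxed<T> = { [K in keyof T]: { value: T[K] } };

type Triple = Boxed<[1, 2, 3]>;
// Triple is [{ value: 1 }, { value: 2 }, { value: 3 }], still a tuple:
// its "length" property keeps the literal type 3
const boxed: Triple = [{ value: 1 }, { value: 2 }, { value: 3 }];
const len: Triple["length"] = 3; // only the literal 3 is assignable
console.log(boxed.length, len);
```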
I think this rule makes it clear when the "array-ness" of the input type is preserved by a mapped type, and it does what users expect. Compare that to when a mapped type is homomorphic but has an as clause: TypeScript can't be sure whether or not you want this special behavior of preserving the "array-ness".
In the issue's example:
```ts
type StructTypeFor<Descriptor extends StructDescriptor> = {
  [K in (keyof Descriptor) & number as Descriptor[K][0]]: ValueTypeOf<Descriptor[K][1]>;
};
```
when instantiating the mapped type StructTypeFor with some array type, there is no special instantiation behavior, because StructTypeFor has an as clause and so is not a homomorphic mapped type without one; K will therefore range over all of (keyof Descriptor) & number.
The same goes for rewriting the mapped type like this:
```ts
type StructTypeFor<Descriptor extends StructDescriptor> = {
  [K in keyof Descriptor as Descriptor[K][0]]: ValueTypeOf<Descriptor[K][1]>;
};
```
(see also https://github.com/microsoft/TypeScript/pull/58237#issuecomment-2064743496)
I agree with @gabritto's reasoning: if this is solvable today, and the proposed fix would make things harder to understand, then the best thing to do is to stick with the current behavior. We can reevaluate in a new issue if unsolvable use cases appear.
This issue has been marked as "Working as Intended" and has seen no recent activity. It has been automatically closed for house-keeping purposes.