React-native-svg: Working with matrices to accumulate transformations

Created on 9 Apr 2020  ·  117 Comments  ·  Source: react-native-svg/react-native-svg

Question

I have to draw a custom map supporting translations, rotations and zooming.
The transform of every gesture is always additive with respect to the previous gestures.
From a math perspective this simply means multiplying in the new transform and proceeding to the next.

The transform style in React Native apparently does not support matrices (and this is very surprising to me). How can I work with matrices in react-native-svg in order to accumulate the transformations?

Thank you!

stale


All 117 comments

And yeah, you can use matrices with transforms in react-native-svg, and matrix multiplication follows the normal linear algebra, so just compute whatever product of matrices you want, and give it as a transform to e.g. a G element.

I already watched the video last week and also tried to use your zoomable-svg but the problem is that I need to accumulate all the transformations: scaling, rotation and translation.
The video shows that the transform is reset at the end of the gesture: definitely too easy because it never accumulates matrices.

How can I specify a matrix?

  • transformMatrix has been deprecated according to the docs
  • the transform: [{ matrix: [...] }] syntax is apparently not working

Same way as in any other svg https://developer.mozilla.org/en-US/docs/Web/SVG/Attribute/transform#Matrix

  <rect x="10" y="10" width="30" height="20" fill="red"
        transform="matrix(3 1 -1 3 30 40)" />

I don't see the problem; you know how to multiply two matrices, no?
https://en.wikipedia.org/wiki/Matrix_multiplication#Definition

Of course I know.
I did not find the transform="matrix(...)" syntax in the docs.
Is this supported for react-native-svg only or is it standard for all the react-native based libraries?

I would prefer to apply the transforms to the element whose child is the SVG. Is this a potential performance hit?

This only works in react-native-svg, as it comes from the svg spec. I would have to do profiling to check performance. Quite likely, natively animated/computed logic would be more performant than a plain js version.

and style={{transform: [{matrix: something}]}} should certainly work in both react-native and react-native-svg

Thanks. I am probably going to use animation only to smooth the movements, but I only need to apply the transforms to change the map view.

and style={{transform: [{matrix: something}]}} should certainly work in both react-native and react-native-svg

I was not successful in using it on a simple View element filled with a color. Will retry ...

Thank you!

There's also this happy chap I tried to help once; it might be useful for you as well: https://github.com/react-native-community/react-native-svg/issues/1064

But, when I think about it, it seems you don't need to accumulate any state at all; you just need to keep track of where the pointers were when the number of active pointers changes. If it changes from 0 or 2 pointers to one, store where that pointer was; on new gesture events, the distance of the pointer from that point is your translate transform. If it changes from 0 or 1 to 2, store the positions of the two pointers; on new events, the change in distance between the two new positions and the two initial ones is your scale transform, the change in angle between them is your rotation, and the distance between the midpoints of the two pairs is your translation. Gestures / states in between should not have any effect or non-linear accumulation affecting the outcome: the transform only depends on the current pointer data and on what was stored when the number of active pointers changed.

hmm, let's make an example:

  • you pinch the map, and it zooms at a specific point (the center of the pinch)
  • then you rotate it, with a different pinch point
  • now you pan it

If you now want to zoom or rotate, you need to accumulate those transforms over the previous ones. The transform style (without the matrix) just applies a fixed sequence of translation, rotation, scale. Either you keep the matrix and then extract the translation, rotation, scale from that matrix, or you will never be able to keep the gestures consistent. Since I don't skew, the matrix should be reversible, but I never did it, as keeping the matrix state is far simpler. Do you agree, or am I missing something? I always worked this way in other technologies and it worked.

Ah, yes, when the gesture ends you should accumulate the transform for sure; I was just thinking about individual gestures on top of the current state. So the initial transform is the identity, let's call it A. Then you multiply in a transform that has the translate, scale, rotate, and offset needed for the just-finished gesture, B = TSRO (translate, scale, rotate, offset), and make C = BA the new accumulated transform.
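As a sketch of that accumulation step (multiplyMatrices and the 3x3 row-major representation are my own assumptions for illustration, not library code):

```javascript
// Minimal sketch: 3x3 row-major affine matrices, so a point is
// transformed as p' = M * [x, y, 1]^T.
function multiplyMatrices(b, a) {
  // C = B * A, i.e. apply A first, then B.
  const c = new Array(9).fill(0);
  for (let row = 0; row < 3; row++) {
    for (let col = 0; col < 3; col++) {
      for (let k = 0; k < 3; k++) {
        c[row * 3 + col] += b[row * 3 + k] * a[k * 3 + col];
      }
    }
  }
  return c;
}

const identity = [1, 0, 0, 0, 1, 0, 0, 0, 1];

// When a gesture ends: fold the gesture matrix B into the accumulated A.
let A = identity;
const B = [1, 0, 50, 0, 1, 30, 0, 0, 1]; // e.g. a finished pan of (50, 30)
A = multiplyMatrices(B, A); // C = BA becomes the new accumulated transform
```

During the next gesture only a fresh B changes; A stays fixed until that gesture ends too.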

Exactly.
The video you linked is about Instagram, where the photo goes back to the original position after the gesture... that's too easy and I can already do it.
When you go to Google Maps instead, you accumulate the transformations, so a state is needed (starting, as you said in your last message, from the identity matrix) by multiplying in every transition (I typically order scale, then rotation and lastly translation).
BTW every time you pinch, you either have a primitive centering the rotation/zoom, or you have to manually translate + rotate/zoom + translate back.

FYI I tried using reanimated and discovered (sadly after two days) that they do not support matrices, so I can't work with their library.

Alternatively you can have four different transforms, one for each primitive, and accumulate those, will need to consider interactions between translations and scale-rotate in the gestures a bit more carefully then.

Since I don't need to skew, they should be reversible.
In this case I could maintain a matrix and then "extract" the separate t, r, s, o parameters. I never did that, but in theory it should be possible.
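For illustration, extracting translate/rotate/scale from an svg-style matrix(a b c d e f) is straightforward when there is no skew and the scale is uniform (decompose/compose are hypothetical helpers sketching the idea, not library code):

```javascript
// Sketch: decompose an svg matrix(a b c d e f) that was built as
// translate * rotate * uniform-scale back into its parts.
// Valid only when there is no skew and the scaling is uniform.
function decompose([a, b, c, d, e, f]) {
  return {
    scale: Math.hypot(a, b),    // length of the first column = s
    radians: Math.atan2(b, a),  // rotation angle
    translateX: e,
    translateY: f,
  };
}

// Rebuild matrix(a b c d e f) from the parts, for round-tripping.
function compose({ scale, radians, translateX, translateY }) {
  const cos = Math.cos(radians) * scale;
  const sin = Math.sin(radians) * scale;
  return [cos, sin, -sin, cos, translateX, translateY];
}
```

With skew or non-uniform scale the decomposition needs more work (e.g. a QR-style split), which is one reason keeping the matrix as the state is simpler.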

But yeah, to clarify the native aspect: if you add animations, it'll probably just feel more disconnected from the gesture. You want to minimize the number of cycles from the gesture event being registered to the final rendered output being visible on the screen. So you probably want to use react-native-gesture-handler, as that allows the processing to stay completely native, rather than doing a context switch to javascript, running the event handler, changing state, running the react lifecycle, committing changes back to native, and only then starting to render, instead of just computing the matrix transform and invalidating one View or Svg / G element... If it doesn't have the matrix support you seek, I recommend forking it, implementing the support yourself, and making a pull request there. I can probably help along the way in case there are any questions on the native side.

This one use-case would probably deserve its own tailor-made performance / use-case optimized package, something like react-native-pan-zoom-rotate / react-native-zoomable perhaps, pull requests to zoomable-svg for a native mode would be welcome as well.

And yeah, if you have the transform as one matrix, you don't need to split it up. Just start with the initial A and B matrices as identity. When the number of active pointers changes, store the positions; for each event, update B (or update the decomposed matrices / primitive transformations and multiply them together to get B); and when the number of active pointers changes again, accumulate B into A and set B to identity again.

So the structure would be something like this:

<Svg style={{transform: [{matrix: B}, {matrix: A}]}}>
  <Text>some content goes here</Text>
</Svg>

Or, equivalently

<Svg>
  <G style={{transform: [{matrix: B}]}}>
    <G style={{transform: [{matrix: A}]}}>
      <Text>some content goes here</Text>
    </G>
  </G>
</Svg>

I assume you're familiar with these, but just in case you want a refresher (they're also in the css / svg specs), here's the equivalent matrices to the primitive transforms:
https://github.com/react-native-community/react-native-svg/blob/ffa2e69c17ce02b21f393a5b57cdbef1c039fe3d/src/lib/extract/transform.peg#L1-L105

And the main state changes that would need to be implemented in native logic instead:
https://github.com/msand/zoomable-svg/blob/fe724c2652595bb6176731be96fde1151e30f21a/index.js#L420-L511

Also, the calculation for the rotation is missing there, so something like this:

const initialAngle = Math.atan2(initial_y1 - initial_y2, initial_x1 - initial_x2);
const rotate = Math.atan2(y1 - y2, x1 - x2) - initialAngle;

But yeah, to clarify the native aspect, if you add animations, it'll probably just feel more disconnected from the gesture, you want to minimize the number of cycles from the gesture event being registered

My wish is to use animation only to restore a position after the user makes a search. Probably it doesn't make sense (in my scenario) to use animation while the user is actively making a gesture.

I understood how the react-native-gesture-handler library works and it is a great idea (creating the AST for the desired transformations and generating the native code that uses the refs under the hood), but I don't know if I will have time to implement the fundamental matrix support.

perhaps, pull requests to zoomable-svg for a native mode would be welcome as well.

As soon as I come to a solution, I will be more than glad to either publish it or make a pull request.

Also, maybe these two can be useful for learning more about react-native transforms:
https://snack.expo.io/@msand/new-instagram-stories
https://snack.expo.io/@msand/rotate-cube

The instagram stories one has three alternative implementations, Stories2 requires a fork of react-native-reanimated I made https://github.com/software-mansion/react-native-reanimated/pull/538

Also, Stories1 in new-instagram-stories has a source code parameter / constant called "alt" with a slightly different transformation

Talking about the matrix transform on standard react-native elements and on Svg:

  • style={{transformMatrix: [1, 0, 50, 0, 1, 50]}} This doesn't work
  • style={{ transform: [ { matrix: [1, 0, 50, 0, 1, 50] } ] }} on a react-native element throws with "Error updating property 'transform' of a view managed by: RCTView"
  • style={{ transform: [ { matrix: [1, 0, 50, 0, 1, 50] } ] }} on Svg throws with "Invariant Violation: Matrix transform must have a length of 9 (2d) or 16 (3d)..."
  • style={{ transform: [ { matrix: [1, 0, 50, 0, 1, 40, 0, 0, 1] } ] }} on a react-native element or on Svg throws with "Error updating property 'transform' of a view managed by: RCTView"

I didn't find even a single example showing how to use the matrix on react-native ... astonishing.

Of course I am able to use the SVG notation on inner elements of the Svg. The following works:

<Rect x="0" y="0" width="100" height="100" fill="red"  transform="matrix(1 0 0 1 50 50)" />

What is the syntax for matrix? Do you know any example of that?

P.S. I am still reading/working on the other posts you wrote.

Thank you

The error comes from here:
https://github.com/facebook/react-native/blob/0b9ea60b4fee8cacc36e7160e31b91fc114dbc0d/Libraries/StyleSheet/processTransform.js#L172-L182

There's some useful helpers in that file as well: https://github.com/facebook/react-native/blob/0b9ea60b4fee8cacc36e7160e31b91fc114dbc0d/Libraries/StyleSheet/processTransform.js#L19-L114

So, to use the normal react-native transform style property (with a list of transforms containing matrices), you need to give the matrices as arrays with either 9 (2d) or 16 (3d) numbers, i.e.

  style={{
    transform: [
      { translateX: tx },
      { translateY: ty },
      { scale: s },
      { rotate: r },
      { translateX: ox },
      { translateY: oy },
      { matrix: [ // 9 numbers doesn't seem to work
          1, 0, 0,
          0, 1, 0,
          0, 0, 1
        ]
      },
      { matrix: [ // seems to work
          1, 0, 0, 0,
          0, 1, 0, 0,
          0, 0, 1, 0,
          0, 0, 0, 1
        ]
      }
    ]
  }}

For the svg standard syntax, you need to give the transform attribute as a string instead (n.b. not a style property, but directly on the element, although we support it in the style props as well for simplicity; it's not required by the spec), i.e. transform="matrix(a b c d e f)", i.e. 6 numbers inside the parentheses, or any sequence of svg transform primitives as a string.

Another syntax supported by react-native-svg elements is giving an array of 6 numbers, i.e. transform={[a, c, e, b, d, f]}, as the transform attribute / style property (the same as the output of the svg transform string parser referred to in an earlier comment), instead of an array of react-native transform objects.
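As a hedged bridge between the two worlds: assuming react-native's 16-element matrices are column-major with the translation in slots 12 and 13 (which is what MatrixMath.js's createTranslate2d suggests), an svg matrix(a b c d e f) could be converted like this (svgToRnMatrix is a hypothetical helper, not an existing API):

```javascript
// Sketch: turn an svg matrix(a b c d e f) into the 16-number array that the
// react-native transform style accepts. Assumption: react-native's 4x4
// matrices are column-major with translation in elements 12/13, as in
// Libraries/Utilities/MatrixMath.js (createTranslate2d sets m[12], m[13]).
function svgToRnMatrix([a, b, c, d, e, f]) {
  return [
    a, b, 0, 0,
    c, d, 0, 0,
    0, 0, 1, 0,
    e, f, 0, 1,
  ];
}

// usage: style={{ transform: [{ matrix: svgToRnMatrix([1, 0, 0, 1, 50, 50]) }] }}
```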

The only transform/matrix I didn't test was the one with 16 elements ... and of course it worked.
But (as I posted before) the one with 9 elements does not work, and this is what misled me.
Thank you

Oh, that's quite possible; not sure why. I would have to set breakpoints in the javascript, java and objective-c code of react-native and react-native-svg to double check. It might be that only the 16-element form works properly with the react-native syntax, to fit together with the other 3d transforms.

With the tailor-made module, I'm thinking it wouldn't depend on anything but react-native (at most react-native-svg as well, i.e. no reanimated, no react-native-gesture-handler). It would be a single View, accepting only a single child, and its only concern would be to handle any pan-zoom-rotate gestures on that child in native code, calculate the needed matrix, and set the new transform either on itself or on the child, using the ViewManagers directly:
https://github.com/facebook/react-native/blob/d0871d0a9a373e1d3ac35da46c85c0d0e793116d/React/Views/RCTViewManager.m#L169-L174

https://github.com/facebook/react-native/blob/f2d58483c2aec689d7065eb68766a5aec7c96e97/ReactAndroid/src/main/java/com/facebook/react/uimanager/BaseViewManager.java#L76-L84

https://github.com/react-native-community/react-native-svg/blob/ffa2e69c17ce02b21f393a5b57cdbef1c039fe3d/ios/ViewManagers/RNSVGNodeManager.m#L157-L164

https://github.com/react-native-community/react-native-svg/blob/ffa2e69c17ce02b21f393a5b57cdbef1c039fe3d/android/src/main/java/com/horcrux/svg/RenderableViewManager.java#L1211-L1225

And would be a simple wrapper:

import * as React from 'react';
import { View, Text } from 'react-native';
import PZR from 'react-native-pan-zoom-rotate';

export default () => (
  <PZR>
    <View>
      <Text>Gesture This</Text>
    </View>
  </PZR>
);

@wcandillon Have you made anything matching this exact api?

These can be used to create native modules:

create-react-native-module pan-zoom-rotate --view --generate-example --package-identifier io.seaber --github-account msand --author-name 'Mikael Sand' --author-email '[email protected]'

https://github.com/brodybits/create-react-native-module

and

npx @react-native-community/bob create react-native-pan-zoom-rotate

https://github.com/react-native-community/bob

bob comes preconfigured using kotlin for the native android code by default, while create-react-native-module uses plain old java

Oh, and create-react-native-module has a basic native View example when using the --view flag

I haven't followed this thread closely, so far it doesn't look like there is a matrix transformation that you cannot express using the transform API. I'm currently working on a complex zooming example with both a pan and a pinch gesture and I'm not encountering any specific issues.

Let me know if you have any specific question regarding that topic. Until I release more content on this theme, these are the two videos that can serve as a basis:
https://www.youtube.com/watch?v=0FVnzuyFNSE
https://www.youtube.com/watch?v=FZnhzkXOT0c

@msand not yet but if there is a cool example to implement, I'd love to do it. What might be the issue with rotate?

Well, at least it seems a bit tricky to do well / concisely with reanimated / gesture-handler. I think that to add it to zoomable-svg you'd just need something like these two lines and another transform for the rotation:

const initialAngle = Math.atan2(initial_y1 - initial_y2, initial_x1 - initial_x2);
const rotate = Math.atan2(y1 - y2, x1 - x2) - initialAngle;

Thanks @msand and @wcandillon for your interesting links and information; I am still digging.
The issue I raised is about handling an Svg map.
While the Instagram gestures video (which I watched last week, great indeed) shows how to apply transforms that are reset at the end of the gesture, managing a map is quite different.
I am willing to create a general-purpose viewer that accumulates the gestures so that the user can continue moving, zooming and rotating the map after each gesture (kind of like google maps, but with custom Svg maps).

After familiarizing myself a bit with react-native-svg, I tried to get the best performance using reanimated, but I discovered they do not support matrix transformations. Then I asked in this thread which other possibilities I had.

BTW, as a side note, during my tests I discovered that the translationX/Y of a pan operation obtained from the pan event is "delayed" (probably because Android needs some time to understand whether it is a pan or a pinch) and, after realizing it is a pan, starts providing the translations, but they are wrong. I could fix this by keeping track of the initial point using the pinch values (when state is BEGAN) and then manually recalculating translationX/Y, ignoring the ones provided by the PAN event.

Thanks!

@raffaeler not sure if it's related to your last mention, but this is how I detect if the pinch gesture began:

  const pinchBegan =
    Platform.OS === "ios"
      ? eq(pinchState, State.BEGAN)
      : eq(diff(pinchState), State.ACTIVE - State.BEGAN);

@wcandillon I have never worked on iOS, but I believe it is a different issue.
The problem I am referring to is more evident when you are working on an emulator, since you use the mouse, which is more precise.
When you click on a pan-enabled object and start moving it, there is a small amount of time needed by Android to understand which kind of gesture it is. If you just use translationX and translationY to compute the new position of the object, you will see it make a small jump at the beginning of the gesture.
If instead you manually compute translationX and translationY by getting pinchX and pinchY in the pan event during the BEGAN state and then calculate their deltas when the pan is ACTIVE, you will get the correct movements.

In order to understand the problem I used to draw a small circle centered at (pinchX, pinchY).

Based on the description, it looks like we are referring to the same thing. You should also beware that the pinch gesture can work with one finger (on iOS at least), so you need to filter for that as well using numberOfPointers.

@msand @wcandillon Precious information :)
I love @msand's proposal to build a native control, but I have to go step by step and start from a pure react-native control first. I am still not familiar enough with the core stuff to know where to start.

From the perf perspective, given that I am not going to make heavy animations, will transforming the view or the Svg lead to different perf results?

Yes. If you transform the view, it'll reuse the bitmap which contains the vector render output, and thus it'll pixelate when you zoom in. If you transform the Svg element or a G element (wrapped with Animated.createAnimatedComponent), it'll re-render the svg content and stay pixel perfect. Thus you should either tile bitmaps the way some map viewers do (i.e. one layer of e.g. 256x256 tiles per power of two of scaling), or have a single bitmap which is at most as large as the native display and transform the content using a single transform / G element. Transforming a bitmap (or even quite a few) using a matrix is very cheap; re-rendering a complete svg tree very much depends on the content.

I realised it only seems to get a bit more complex if one tries to keep both the A and B matrices decomposed / as separate primitive transforms, and even then, it just requires decomposeMatrix(C), where C = BA, to get the new A when you want to set B to identity.

But if you don't decompose C (nor A), you can just substitute A with C and set all the decomposed / primitive transforms of B i.e. TSRO, to identity, whenever the type of gesture changes / ends, i.e. when the number of active touches / pointers change.

I.e. it gets a bit tricky because matrix multiplication in general (or affine transforms more specifically) doesn't commute. So if you want to maintain a TSRO decomposition of both A and B, you need to take their product C, to do the change of basis due to two rotations with a translate (and a scale) in between, and decompose C to get a single rotation about a single scaled and translated origin, when you set the parts of B to identity / flatten the pan gesture offset.

And yeah, the physical intuition / constraint on state change here I aimed for in zoomable-svg, is that when you go from zero to one active pointers, you expect the point you started touching to stay in contact with the pointer as you move, when you go to two active pointers, you expect both points to stay in contact with their active pointers (thus the need for rotation as well), and if you then go back to one pointer, you expect that to stay in contact (i.e. only translation), and go back to translation+scaling+rotation if you then go back to two active pointers.

@msand I am currently building a sample test without using reanimated or react-native-gesture-handler, in order to work directly with the matrix transform.

But even with the help of decomposition primitives, I could not make it work with those libraries, because I would have to rewrite the decomposition using their primitives, since the values used by those libraries never come back to javascript.

Do you agree or am I missing something?

I've barely touched react-native-gesture-handler, so I'm probably not the best person to ask about that. I think @wcandillon probably has several orders of magnitude more experience with that library than me, maybe he can answer this?

At least forking zoomable-svg, or copying the code from index.js into your own project and modifying it there, should allow using either the accumulated or the decomposed approach with plain react-native and react-native-svg. The PanResponder api should provide everything needed: https://github.com/msand/zoomable-svg/blob/fe724c2652595bb6176731be96fde1151e30f21a/index.js#L291-L328

@raffaeler I finally understand what you are trying to do; sorry that it took me so much time. I'm also trying to save the transformation matrix when the gesture ends. First on the JS thread as a proof of concept (which I expect to be glitchy, because there will be some latency between the time the gesture values are reset and the time the JS thread re-renders the component).

Using offsets with animation values from gestures is convenient; they simulate what would happen if the gesture were continuous. Because of the pinch focal offset, the gesture cannot be treated as "continuous" anymore, since the offset happens when the touch begins.
We agree that the generic solution to this problem is simply to keep a transformation matrix in the state (ideally in the UI thread, and have its decomposition done in the UI thread as well). In case that turns out to be too complicated/impossible, I would try to spend some time on an ad-hoc solution to the pinch focal continuity. I will keep you posted if I make progress; keep me posted on your side as well.

I'd recommend keeping the accumulated state as a single matrix, and skipping its decomposition. It's enough that the current delta on top of it is defined in decomposed form in the state; that makes it easy to calculate the final delta when a gesture ends / changes type, then multiply that into the current accumulated matrix and set the decomposed transforms to their identity elements.
So e.g.

<Svg>
  <G style={{transform: [
      { translateX: tx },
      { translateY: ty },
      { translateX: ox },
      { translateY: oy },
      { scale: scale },
      { rotate: radians },
      { translateX: -ox },
      { translateY: -oy },
]}}>
    <G style={{transform: [{matrix: accumulatedMatrix}]}}>
      <Text>some content goes here</Text>
    </G>
  </G>
</Svg>

And set tx, ty, ox, oy, radians to zero, and scale to one, when you multiply BA = Tx * Ty * Ox * Oy * S * R * Ox^-1 * Oy^-1 * accumulatedMatrix = C and set the new accumulatedMatrix to C.
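As a sketch of that multiplication chain (the helper names mult/translate/scaled/rotated are assumptions for illustration, not react-native-svg API), using svg-style [a, b, c, d, e, f] matrices:

```javascript
// Sketch: build the gesture matrix B = T * O * S * R * O^-1 as one svg-style
// [a, b, c, d, e, f], i.e. the affine matrix [[a, c, e], [b, d, f], [0, 0, 1]].
const mult = ([a1, b1, c1, d1, e1, f1], [a2, b2, c2, d2, e2, f2]) => [
  a1 * a2 + c1 * b2,      // a
  b1 * a2 + d1 * b2,      // b
  a1 * c2 + c1 * d2,      // c
  b1 * c2 + d1 * d2,      // d
  a1 * e2 + c1 * f2 + e1, // e
  b1 * e2 + d1 * f2 + f1, // f
];
const translate = (x, y) => [1, 0, 0, 1, x, y];
const scaled = (s) => [s, 0, 0, s, 0, 0];
const rotated = (r) => [Math.cos(r), Math.sin(r), -Math.sin(r), Math.cos(r), 0, 0];

// Pan by (tx, ty), then scale by s and rotate by r about the origin (ox, oy).
const gestureMatrix = (tx, ty, ox, oy, s, r) =>
  [translate(tx, ty), translate(ox, oy), scaled(s), rotated(r), translate(-ox, -oy)]
    .reduce(mult);
```

Scaling by 2 about (10, 10) this way yields [2, 0, 0, 2, -10, -10], which leaves the point (10, 10) fixed, as a pinch centered there should.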

That makes sense. Do you have any thoughts on how to calculate the accumulatedMatrix? I tried with setting it in the JS thread but that creates a tiny glitch between the end of the gesture and the time it takes to re-render the component.

So, when you go from zero to one pointers, and move the pointer, only tx = x - initial_x and ty = y - initial_y change, and when you change from one to zero or two active pointers, then only the translation needs to be composed into the accumulated matrix.

When you have two active pointers, imagine a line connecting the two points at the moment you enter the two-active-pointers state. Let's consider the origin (ox, oy) of the gesture: given the two points p1 = (x1, y1) and p2 = (x2, y2), it's their midpoint

const ox = (x1 + x2) / 2;
const oy = (y1 + y2) / 2;
const initial_radians = Math.atan2(y1 - y2, x1 - x2);
const initial_distance = calcDistance(x1, y1, x2, y2);

function calcDistance(x1, y1, x2, y2) {
  const dx = x1 - x2;
  const dy = y1 - y2;
  return Math.sqrt(dx * dx + dy * dy);
}

Then ox, oy, initial_radians and initial_distance are constant as long as the number of active pointers doesn't change

When either moves, you have

const tx = (x1 + x2) / 2 - ox;
const ty = (y1 + y2) / 2 - oy;
const scale = calcDistance(x1, y1, x2, y2) / initial_distance;
const radians = Math.atan2(y1 - y2, x1 - x2) - initial_radians;

And when you go from two to either one or zero active pointers, you can accumulate all transforms into a single matrix.
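In case it helps, the snippets above could be packaged into two small helpers (the names initialState and twoPointerDelta are hypothetical, a sketch rather than any library's API):

```javascript
// Sketch: distance between the two pointers, as in the comment above.
function calcDistance(x1, y1, x2, y2) {
  return Math.hypot(x1 - x2, y1 - y2);
}

// Stored once, when the second pointer lands (number of active pointers becomes 2).
function initialState(x1, y1, x2, y2) {
  return {
    ox: (x1 + x2) / 2,
    oy: (y1 + y2) / 2,
    distance: calcDistance(x1, y1, x2, y2),
    radians: Math.atan2(y1 - y2, x1 - x2),
  };
}

// Computed on every move event while two pointers stay active:
// the decomposed delta (translate / scale / rotate about the stored origin).
function twoPointerDelta(initial, x1, y1, x2, y2) {
  return {
    ox: initial.ox,
    oy: initial.oy,
    tx: (x1 + x2) / 2 - initial.ox,
    ty: (y1 + y2) / 2 - initial.oy,
    scale: calcDistance(x1, y1, x2, y2) / initial.distance,
    radians: Math.atan2(y1 - y2, x1 - x2) - initial.radians,
  };
}
```

When the pointer count changes again, this delta is what gets folded into the accumulated matrix before the state is reset.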

Oh, the browser hadn't updated with the latest comment when I replied. setNativeProps might help you a bit there, or some combination of reanimated and gesture-handler, but going completely tailor-made native will certainly be the way to resolve it optimally.

And in case this helps someone now or in the future: to calculate the accumulatedMatrix, convert the primitive transforms into matrix representation and multiply them together. E.g. if you have a chain with more than one matrix, AB..., just use multiply_matrices(A: Matrix, B: Matrix): Matrix or something similar to turn two matrices into one, until you only have one left, e.g.

const accumulatedMatrix = [A, B, C].reduce(multiply_matrices)

The order in which you do the compositions / multiplications when you have more than two doesn't matter, e.g. ((AB)C) = (A(BC)), i.e. it's associative => reduceLeft = reduceRight, and thus it's straightforward to compute individual compositions in parallel. But the order of e.g. a translate T and a rotate R matters: TR != RT, i.e. it's noncommutative (the order of operations matters).

In this specific case I guess it makes most sense to use the api provided by react-native itself, i.e.

createIdentityMatrix: function()
https://github.com/facebook/react-native/blob/0b9ea60b4fee8cacc36e7160e31b91fc114dbc0d/Libraries/Utilities/MatrixMath.js#L20-L22

createTranslate2d: function(x, y)
https://github.com/facebook/react-native/blob/0b9ea60b4fee8cacc36e7160e31b91fc114dbc0d/Libraries/Utilities/MatrixMath.js#L84-L88

createScale: function(factor)
https://github.com/facebook/react-native/blob/0b9ea60b4fee8cacc36e7160e31b91fc114dbc0d/Libraries/Utilities/MatrixMath.js#L101-L105

createRotateZ: function(radians)
https://github.com/facebook/react-native/blob/0b9ea60b4fee8cacc36e7160e31b91fc114dbc0d/Libraries/Utilities/MatrixMath.js#L156-L160

multiplyInto: function(out, a, b)
https://github.com/facebook/react-native/blob/0b9ea60b4fee8cacc36e7160e31b91fc114dbc0d/Libraries/Utilities/MatrixMath.js#L170-L223

@wcandillon :)
Now you can understand my huge surprise when I discovered that react-native-gesture-handler and reanimated do not support matrices.
I remember the Foley and van Dam book as one of the most important I ever read in my life. Graphics is all about matrix calculations.

As @msand nicely summarized, the important thing is preserving the order. You start from the identity as your state. During the gesture, you just build the B matrix with all the transformations coming from rotation, scale and translate, where rotation and scale each also involve two translations, representing respectively the center of rotation and of scaling.
When the gesture finishes, you just multiply the state by B and obtain the new state, while B is reset, of course, to the identity.
Luckily for us, as @msand wrote, you can keep the state and B separate:

                transform: [
                    { matrix: [1.5, 0, 0, 0, 0, 1.5, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1] },
                    { matrix: [.5, 0, 0, 0, 0, .5, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1] }
                ]

This means you never have to multiply matrices together; you just keep the first as the previous state and the second as the current ongoing gesture.

I am making slow progress, both because I am also working on another project and because I only started with react-native last week; and since I need to work with Typescript, I have some additional issues. For example, I spent some time understanding how to map MatrixMath.js into my typescript project, but now it works very nicely :)

Will keep you updated!

Actually, if you use the midpoint of the two initial points when two pointers become active, you can use the same origin for both scale and rotation: as it'll be on the line connecting the points, it'll rotate correctly, and because it's in the middle, it'll scale the distance between the points correctly / evenly.

Actually, it's the other way around: because it's on the line, the scale is correct; because it's in the middle, the rotation is correct.

I've managed to confuse even myself now ;) Either way, the midpoint should work as the origin: when you place two pointers on a surface, the point in the middle should stay in the middle, even if you move the two points around.

A slightly more efficient (by half) approach is possible with a large number of serial matrix multiplications, doing O(2^((log2 N) - 1)) instead of O(N - 1) operations, thanks to associativity: reduce the amount of work by half at each high-level step, i.e. take every pair of remaining compositions (modulo an odd last one, which is left unchanged) and reduce each pair to a single element.

In this specific case, it's also possible to use the fact that affine 2d transforms only require six numbers, and that e.g. the origin offset and translation are additive, to reduce the number of atomic floating point operations needed to compute the accumulated matrix. Write out the actual algebraic expression for the computation, factor out any reused parts, simplify if possible, and do the math without a single branch / jump operation, to maximise performance.
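A sketch of that "write out the algebra" idea (gestureMatrixFused is my own hypothetical name): expanding B = T * O * S * R * O^-1 algebraically gives the six svg-style numbers directly, with the scaled sin/cos terms computed once and reused, and no general-purpose matrix multiplications:

```javascript
// Sketch: compute B = T * O * S * R * O^-1 directly as the six affine numbers
// [a, b, c, d, e, f] of matrix(a b c d e f), branch-free.
function gestureMatrixFused(tx, ty, ox, oy, s, r) {
  const cos = s * Math.cos(r); // s·cos r, reused below
  const sin = s * Math.sin(r); // s·sin r, reused below
  return [
    cos, sin, -sin, cos,
    tx + ox - (ox * cos - oy * sin), // e = tx + ox - s(ox·cos r - oy·sin r)
    ty + oy - (ox * sin + oy * cos), // f = ty + oy - s(ox·sin r + oy·cos r)
  ];
}
```

This does the same job as chaining five primitive matrices, but with two trig calls, six multiplications and a handful of additions per event.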

@msand Agree, I was also thinking of a dedicated library reducing the number of sums/muls.

Do you have any idea on how to preserve the smoothing provided by Animated.event() when working with matrices?

I'd assume it's just the flow of gesture events that makes it smooth, unless you're combining it with Animated.{decay, timing, spring, Easing} or with a ScrollView, in which case it's the decelerationRate / decay coefficient https://reactnative.dev/docs/scrollview#decelerationrate
https://reactnative.dev/docs/animated#decay

analytical spring model based on damped harmonic oscillation
https://reactnative.dev/docs/animated#spring

Easing + timing interpolation
https://reactnative.dev/docs/easing
https://reactnative.dev/docs/animated#timing

I'm certainly too overworked to think clearly: computing a binary tree of pairs only decreases the time required if there's more than one processor (modulo communication / sync overhead); emulating the parallel computation serially requires more operations than doing the reduction straight.

After making some tests, I am thinking that the best strategy (before going native) is:

  • keeping the Rotate, Zoom, Pan separate and joining them in a matrix at the end of the gesture so that I can calculate the new state before starting the following gesture
  • Rotate and zoom can probably go unsmoothed by Animated.event, while it is always worth keeping pan animated.
    I will also try to animate them using parallel, but I am not that optimistic about low-end Android devices.

I've built an example using setState() which works well (not super clean, just as an experiment):

    cond(eq(state, State.END), [
          call(
            [pinch.x, pinch.y, origin.x, origin.y, scale],
            ([pinchX, pinchY, originX, originY, scale]) => {
              setTransform([
                ...transform,
                { translateX: pinchX },
                { translateY: pinchY },
                { translateX: originX },
                { translateY: originY },
                { scale },
                { translateX: -originX },
                { translateY: -originY }
              ]);
            }
          )
        ])   
// ...
       <Animated.Image
            style={[
              styles.image,
              {
                transform: [
                  ...transform,
                  ...translate(pinch),
                  ...transformOrigin(origin, {
                    scale
                  })
                ]
              }
            ]}
            source={require("./assets/zurich.jpg")}
          />

Here potential optimizations would be to use an accumulated matrix instead of recalculating the matrix every time, and maybe to use setNativeProps(). While this would work quite well in practice (right?), I'm drawn by the challenge of doing this only on the UI thread.

React Native's MatrixMath points to this pseudo-algorithm: https://www.w3.org/TR/css-transforms-1/#decomposing-a-2d-matrix. While reanimated doesn't work with matrices, it would work with the decomposed form, so we could build the functions in reanimated to calculate the matrix and decompose it (might be lots of work). I'm also wondering if there are some shortcuts we could take since we are trying to do this for a specific transformation; we are not necessarily trying to solve the general case.

I think easiest might be to fork both reanimated and react-native-gesture-handler and either add the matrix support, or make a quick proof of concept of a tailor made api for this specific use case. I'm too busy with work atm to put much effort into it right now, but would be a fun thing to explore, and might do it to relax from work at some point.

An alternative api would be to add something declarative to reanimated for flattenOffset / accumulating transform matrices, the same thing applies there, take the list of current transforms, compose them, swap with the current accumulated one and set transforms / animated values to identity. Probably much easier to implement that in native logic, than implementing the matrix multiplication and decomposition logic using the reanimated syntax as is.

"might do it to relax from work at some point."
🤣

I agree, happy that we are on the same page. This is a great use case for the upcoming improvements in reanimated for instance. And for now, the tailor-made solution is definitely a fun puzzle (that might not be that hard actually).

Thank you for your support and I will keep you posted. These things are hard to leave at rest ;-)

Yeah, it can be quite stimulating to think about 😄 Nice to change the focus of attention for a while, making a relatively well-specified and useful pattern easier to achieve, with a reasonably short time to finish. Btw, seems reanimated might support matrices?
https://github.com/software-mansion/react-native-reanimated/pull/110#issuecomment-426084810

@raffaeler I suspect it's best not to do any extra animation in the gestures, and get to the final resting / rendered output asap. But when you search / want to show a location, some kind of fly to algorithm probably makes sense: https://github.com/mapbox/mapbox-gl-js/search?q=flyto&unscoped_q=flyto

https://github.com/mapbox/mapbox-gl-js/blob/8c7c88332ecf515555a308d38117440fa22be126/src/ui/camera.js#L869-L1084

    // This method implements an “optimal path” animation, as detailed in:
    //
    // Van Wijk, Jarke J.; Nuij, Wim A. A. “Smooth and efficient zooming and panning.” INFOVIS
    //   ’03. pp. 15–22. <https://www.win.tue.nl/~vanwijk/zoompan.pdf#page=5>.


 * @param {number} [options.curve=1.42] The zooming "curve" that will occur along the
 *     flight path. A high value maximizes zooming for an exaggerated animation, while a low
 *     value minimizes zooming for an effect closer to {@link Map#easeTo}. 1.42 is the average
 *     value selected by participants in the user study discussed in
 *     [van Wijk (2003)](https://www.win.tue.nl/~vanwijk/zoompan.pdf). A value of
 *     `Math.pow(6, 0.25)` would be equivalent to the root mean squared average velocity. A
 *     value of 1 would produce a circular motion.

I came up with another way to think about the transformation: essentially it's enough to define how much the initial coordinate system has been rotated, scaled and then translated. Because the two dimensions aren't scaled independently, and no skew is applied, it's enough to define how a single unit vector has been transformed, e.g. the unit vector from the origin (0, 0) to 1 unit in the x axis (1, 0).
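A sketch of that observation (illustrative helper, assuming a similarity transform: rotation + uniform scale + translation, no skew):

```typescript
// Where the transform sends the origin gives the translation (e, f);
// where it sends the unit x vector (1, 0) gives the first column
// (a, b) = (s*cos(t), s*sin(t)); without skew or independent axis
// scaling the second column is then forced to be (-b, a).
// Result uses the SVG matrix(a b c d e f) layout.
type Point = { x: number; y: number };

function similarityFromUnitVector(originImage: Point, unitXImage: Point): number[] {
  const a = unitXImage.x - originImage.x;
  const b = unitXImage.y - originImage.y;
  return [a, b, -b, a, originImage.x, originImage.y];
}
```

So two tracked points fully determine the whole transform.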

I thought about it, but the problem with reanimated is that you can't read the values at the end of the gesture. They are totally opaque (the __value._val can be read only in debug mode).
So, once you finish the first gesture, it is the end.

@msand @wcandillon
I finally did it ... the code is still a bit dirty, but at least the behavior is correct and the perf with a simple Svg is absolutely good ... I have to see what happens with complex drawings.
I ended up keeping the transforms separate. BTW, beforeCenter and afterCenter have equal values but opposite signs. On the gesture release, I calculate the matrixState and reset the other values to their defaults.
This way, the gestures can be accumulated.

{ translateX: this.state.beforeCenter.x },
{ translateY: this.state.beforeCenter.y },

{ rotate: this.state.rotate },
{ scale: this.state.scale },

{ translateX: this.state.afterCenter.x },
{ translateY: this.state.afterCenter.y },

{ translateX: this.state.pan.x },
{ translateY: this.state.pan.y },

{ matrix: this.state.matrixState },

I have to keep those values twice: once as Animated and the others as raw numbers because I need to read the values to calculate the matrix at the end of the gesture:
```
interface GestureViewState {
beforeCenter: Animated.ValueXY;
rotate: Animated.Value;
scale: Animated.Value;
afterCenter: Animated.ValueXY;
pan: Animated.ValueXY;

valueBeforeCenter: IPoint,
valueRotate: number,
valueScale: number,
valueAfterCenter: IPoint,
valuePan: IPoint,

isPanOnly: boolean;

initialPoint?: IPoint;
initialPinch?: IPinch;

matrixState: number[];
}
```

Finally, I had to use a ref (that is squiggling in red in the editor, damn) to retrieve the view size using measure, otherwise everything is slightly offset. I don't know if there is a better way to get the size of the current client area, but relying on the window size is definitely wrong.

This test is still raw as I don't have enforced any constraint and still have to decide:

  • should I allow to rotate and scale together
  • should I allow to pan, rotate and scale together
  • what should I do if the number of touches changes during the gesture
    I will look for some official best practice to make this behavior as predictable as possible for the user.

@raffaeler I tried a similar approach and it works well indeed. Now I am trying to stay on the UI thread.
I'm struggling with the matrix calculation. Considering the following transformation:

       transform: ([
                  { translateX: px },
                  { translateY: py },
                  { translateX: ox },
                  { translateY: oy },
                  { scale: s },
                  { translateX: -ox },
                  { translateY: -oy }
                ])

I'm expecting the matrix transformation via processTransform() to be:

| Result | | | |
| - |:-:| -:|-:|
| s | 0 | 0 | 0 |
| 0 | s | 0 | 0 |
| 0 | 0 | 1 | 0 |
| (px + ox) * s - ox | (py + oy) * s - oy | 0 | 1 |

The correct result seems to be (which also intuitively makes sense but I'm not able to get there via the matrix multiplication):

| Result | | | |
| - |:-:| -:|-:|
| s | 0 | 0 | 0 |
| 0 | s | 0 | 0 |
| 0 | 0 | 1 | 0 |
| px + ox - ox * s | py + oy - oy * s | 0 | 1 |

But this is not the result I'm getting. I am probably making a trivial mistake here right?

After that, decomposition gives us translate(px + ox - ox * s, py + oy - oy * s) and scale(s), which you can input as

[
   { translate: [px + ox - ox * s,  py + oy  - oy * s] },
   { scale: s }
]

What confuses me about the last part is that the order of translate and scale matters but it is not specified by the decomposition algorithm.
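For reference, the relevant part of that decomposition, restricted to our no-skew, uniform-scale case (a sketch, not the full algorithm from https://www.w3.org/TR/css-transforms-1/#decomposing-a-2d-matrix):

```typescript
// Decompose matrix(a b c d e f), assuming uniform scale and no skew.
// The spec's convention is that recomposition applies translate,
// then rotate, then scale.
function decompose2d(a: number, b: number, c: number, d: number, e: number, f: number) {
  const scale = Math.hypot(a, b);    // s = |(a, b)|
  const rotation = Math.atan2(b, a); // radians
  // c and d are redundant here: for a similarity transform (c, d) = (-b, a)
  return { translateX: e, translateY: f, rotation, scale };
}
```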

There are a few things there:

  1. Join the translate x, y and nearby translate sums, otherwise internally there will be 4 matrix multiplications instead of one
  2. you have to translate back by the exact quantity (but with opposite sign), as I wrote in my comment. You are not considering px and py when you go back to the original position

My (still not optimized) version of the transformations are:

        var temp = MatrixMath.createIdentityMatrix();
        var a1 = MatrixMath.createTranslate2d(this.state.valueBeforeCenter.x, this.state.valueBeforeCenter.y);
        var a2 = MatrixMath.createRotateZ(this.state.valueRotate);
        var a3 = MatrixMath.createScale(this.state.valueScale);
        var a4 = MatrixMath.createTranslate2d(
            this.state.valueAfterCenter.x + state.dx,
            this.state.valueAfterCenter.y + state.dy);

        MatrixMath.multiplyInto(temp, temp, a1);
        MatrixMath.multiplyInto(temp, temp, a2);
        MatrixMath.multiplyInto(temp, temp, a3);
        MatrixMath.multiplyInto(temp, temp, a4);
        MatrixMath.multiplyInto(temp, temp, this.state.matrixState);

where:

  • valueBeforeCenter is the positive transform (which include o and p in your case)
  • valueRotate and valueScale are of course the rotation in radians and the scaling
  • valueAfterCenter is exactly the same as valueBeforeCenter but with opposite sign
  • dx and dy are the final amount of the pan operation
  • matrixState is the previous state of my matrix which will be overwritten by temp.

HTH

@raffaeler the transform I wrote down corresponds to the exact gesture/animation I am trying to achieve. I am surprised to see that the order of the transformations in your example is reversed compared with mine. 🤔 Other than that, everything else looks identical.

Regardless, my goal is to get back to translateOffset and scaleOffset values when releasing the gesture. So first, I wanted to calculate the matrix by hand to make sure I have some sort of grip on what is going on and work my way back. However, I am not able to get the same result by hand; is there any chance you could point me to the mistake I'm making when multiplying the matrices manually?

The order is important; you have to think in reverse, because conceptually you move the axis origin, not your drawing.

The result you posted has numbers in the last row, but they should be in the last column instead.
Try putting the symbols here (put parenthesis as well) and look at the result.

Yeah, at least I'd be used to having the translations in the last column as well, referring to a constant unit vector in a direction orthogonal to the other two or three ones. The decomposition seems to make sense otherwise, e.g.

([
                  { translateX: px },
                  { translateY: py },
                  { translateX: ox },
                  { translateY: oy },
                  { scale: s },
                  { translateX: -ox },
                  { translateY: -oy }
                ])
= Px Py Ox Oy S Ox^-1 Oy^-1
= P O S O^-1
= T S O^-1

     ╔═             ═╗   ╔═     ═╗   ╔═       ═╗   ╔═             ═╗   ╔═             ═╗
     ║ 1 0 (px + ox) ║   ║ s 0 0 ║   ║ 1 0 -ox ║   ║ 1 0 (px + ox) ║   ║ s 0 (-ox * s) ║
     ║ 0 1 (py + oy) ║ * ║ 0 s 0 ║ * ║ 0 1 -oy ║ = ║ 0 1 (py + oy) ║ * ║ 0 s (-oy * s) ║
     ║ 0 0     1     ║   ║ 0 0 1 ║   ║ 0 0  1  ║   ║ 0 0     1     ║   ║ 0 0     1     ║
     ╚═             ═╝   ╚═     ═╝   ╚═       ═╝   ╚═             ═╝   ╚═             ═╝

     ╔═                     ═╗
     ║ s 0 (px + ox -ox * s) ║
   = ║ 0 s (py + oy -oy * s) ║
     ║ 0 0         1         ║
     ╚═                     ═╝

Think of it this way: if you first apply translation and then scaling, i.e. ST, then the offset gets scaled by that amount; if you first scale and then translate, i.e. TS, then you scale the space, and only then offset. The only difference is a scaling of the translation, in the case of a single pair of these two primitives.

In this specific case, to scale about some pinch center point, you first need to move that point to the origin, i.e. -ox and -oy = O^-1, then scale about that origin, and then move that origin such that it is in the position where it was on screen before the initial translation, by adding the offsets ox and oy = O
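A one-line numeric check of that chain, per axis (sketch; t = p + o - o * s as derived above):

```typescript
// translate by o, scale by s about the origin, translate back by o,
// then pan by p: the net offset per axis is p + o - o * s.
function pinchTranslate(p: number, o: number, s: number): number {
  return p + o - o * s;
}

// Scaling 2x about x = 100 with no pan shifts content by -100,
// which is what keeps the pinch focal point fixed on screen.
const shift = pinchTranslate(0, 100, 2);
```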

And with regards to what to do when the number of active pointers change. Consider it equivalent to the gesture completely ending, and a completely new one starting. Accumulate the state, and set the diff / delta to identity.

@msand 💯 and the proof of concept using setState/setNativeProps for the accumulated matrix works well.
Now I'm trying to build something that doesn't involve the JS thread.

The transformation is:

{
  transform: [
    // accumulated transformation
     ...translate(offset),
    { scale: scaleOffset },
   // transformation done by the gesture
    ...translate(tr),
     { scale }
  ]
}

The first time you move the gesture, everything works beautifully: focus, translation, scaling. Now I'm trying to set offset to the correct value when the gesture ends.

cond(eq(state, State.END), [
  // store offset
  vec.set(offset, vec.add(offset, /* ...? */)),
  set(scaleOffset, multiply(scaleOffset, scale)),
  // reset values
  set(scale, 1),
  vec.set(tr, 0),
])

I have a few questions based on your comment.

  1. I'm not familiar with the O^-1 notation; what does it mean?
  2. In your example you get to the result by multiplying in the "reverse" order (rtl); I couldn't find a reference to this in the processTransform implementation, even though this is clearly what happens based on the results given by that function.
  3. I'm still confused about the decomposition algorithm, it returns for instance, translate and scale. How to do you know in which order would you need to apply these transformations?
  1. O^-1 is just ascii notation for exponentiation "^" of the matrix O using the scalar / real number "-1" as the exponent, and corresponds to the inverse element / inverted matrix such that O^1 * O^-1 = O^(1-1) = O^0 = I, where I is the identity matrix

  2. As matrix multiplication is associative, it doesn't matter how you group the multiplications; I did TSO = TM = F, where M is the intermediate matrix I wrote out and F is the final, but you can also do TSO = NO = F, and it'll give the same result, ((AB)C) === (A(BC))

  3. It corresponds to a single matrix, and the matrix F gets multiplied with the vectors v such that the resulting vector v' = Fv
    so you can probably assume from this that the translation is applied independently from scaling / scaling has already been accounted for

     ╔═                     ═╗ ╔═ ═╗   ╔═      ═╗ ╔═ ═╗   ╔═ ═╗   ╔═        ═╗
     ║ s 0 (px + ox -ox * s) ║ ║ x ║   ║ s 0 tx ║ ║ x ║   ║ x'║   ║ s*x + tx ║
     ║ 0 s (py + oy -oy * s) ║ ║ y ║ = ║ 0 s ty ║ ║ y ║ = ║ y'║ = ║ s*y + ty ║
     ║ 0 0         1         ║ ║ 1 ║   ║ 0 0  1 ║ ║ 1 ║   ║ 1 ║   ║     1    ║
     ╚═                     ═╝ ╚═ ═╝   ╚═      ═╝ ╚═ ═╝   ╚═ ═╝   ╚═        ═╝

To clarify, associativity only means you don't have to write out parentheses to write an unambiguous statement/expression/equation in the language of matrix multiplications. This is true for multiplication in the division algebras as well, except octonions, i.e. real, complex, and quaternion. Real and complex algebras commute, but quaternions lose that property similarly to matrices, and octonions aren't even associative; sedenions aren't even a division algebra, so you can't even talk about an inverse / negative exponent. Quaternions are a good fit for 3d transforms, i.e. rotation, translation, scaling (and usable for 2d as well).

Also, it's fully possible that processTransform produces column-major 1d array representations of the matrices, in this case the way you wrote it out makes sense, just transposed https://en.wikipedia.org/wiki/Row-_and_column-major_order
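If so, converting between the two layouts is just a transpose of the flat 4x4 array; a small sketch:

```typescript
// Transpose a 4x4 matrix stored as a flat 16-element array; this
// converts column-major storage to row-major and vice versa.
function transpose4x4(m: number[]): number[] {
  const t = new Array<number>(16);
  for (let row = 0; row < 4; row++)
    for (let col = 0; col < 4; col++)
      t[row * 4 + col] = m[col * 4 + row];
  return t;
}
```

E.g. a translation stored at indices 12–14 (column-major) ends up at indices 3, 7, 11 (row-major) after the transpose.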

And btw, if you don't consider a change in the number of active pointers as end/start of a gesture, you get the issue I still haven't fixed in https://iws.nu/ http://infinitewhiteboard.com/
Try pinching and then releasing one of the fingers (either the first or the second one you put on the screen), and move around, and it'll feel awkward for sure ;)

Thank you @msand ♥️

Yes, I noticed that with gesture-handler you definitely need to check the number of active pointers.
I'm getting close with the tailor-made solution: https://www.dropbox.com/s/hdlb2mefk988dc5/t1.mp4?dl=0. The math is still not 100% correct as there are a lot of moving pieces and I need to check my code (this is why the accumulated matrix is so nice for such a scenario)

And transposing an expression switches the order of operations, i.e. property 3
https://en.wikipedia.org/wiki/Transpose#Properties

(AB)^T = B^T A^T

@wcandillon @msand
This is my optimized function in typescript to create a matrix that includes all the possible transformations done at once:

// Creates a matrix equivalent to the multiplication of the following matrices:
// - translating the axis to the center of the scale/rotation
// - rotate is in radians
// - scale (multiplier, therefore 1 does not scale)
// - translating the axis back to the origin (opposite sign of the initial translation)
// - final translation (pan occurring when dragging both fingers while rotating and/or scaling)
// This matrix needs to be multiplied by the previous state when accumulating gestures over time.
// For example:
// var temp = this.createRotateScaleMatrix(this.state.valueRotate, this.state.valueScale,
//                              this.state.valueAfterCenter, { x: state.dx, y: state.dy});
// MatrixMath.multiplyInto(temp, temp, this.state.matrixState);
//
// Equivalent code, computed using the MatrixMath support available in React Native:
// var temp = MatrixMath.createTranslate2d(center.x, center.y);
// var a2 = MatrixMath.createRotateZ(this.state.valueRotate);
// var a3 = MatrixMath.createScale(this.state.valueScale);
// var a4 = MatrixMath.createTranslate2d(-center.x + state.dx, -center.y + state.dy);
// MatrixMath.multiplyInto(temp, temp, a2);
// MatrixMath.multiplyInto(temp, temp, a3);
// MatrixMath.multiplyInto(temp, temp, a4);
// MatrixMath.multiplyInto(temp, temp, this.state.matrixState);
createRotateScaleMatrix(rotate: number, scale: number, center: IPoint, finalPan: IPoint) : number[] {
    var beforeX = -center.x;
    var beforeY = -center.y;
    var afterX = center.x;
    var afterY = center.y;
    var cost = Math.cos(rotate);
    var sint = Math.sin(rotate);
    // The matrix goes by column, as expected by React Native
    var temp : number[] = new Array(16);
    temp[0] = cost*scale;
    temp[1] = sint*scale;
    temp[2] = 0;
    temp[3] = 0;

    temp[4] = -sint*scale;
    temp[5] = cost*scale;
    temp[6] = 0;
    temp[7] = 0;

    temp[8] = 0;
    temp[9] = 0;
    temp[10] = 1;
    temp[11] = 0;

    temp[12] = beforeX + cost*scale*(afterX+finalPan.x) - sint*scale*(afterY+finalPan.y);
    temp[13] = beforeY + sint*scale*(afterX+finalPan.x) + cost*scale*(afterY+finalPan.y);
    temp[14] = 0;
    temp[15] = 1;
    return temp;
}
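A quick sanity check of the formulas above (re-implemented standalone here so it runs without React Native's MatrixMath; same column-major layout): with no rotation, unit scale and no pan, the result should collapse to the identity regardless of the chosen center.

```typescript
// Standalone copy of the matrix construction above, returned as a
// column-major flat 16-element array as React Native expects.
function rotateScaleMatrix(
  rotate: number, scale: number,
  cx: number, cy: number,
  panX: number, panY: number
): number[] {
  const cost = Math.cos(rotate);
  const sint = Math.sin(rotate);
  return [
    cost * scale, sint * scale, 0, 0,
    -sint * scale, cost * scale, 0, 0,
    0, 0, 1, 0,
    -cx + cost * scale * (cx + panX) - sint * scale * (cy + panY),
    -cy + sint * scale * (cx + panX) + cost * scale * (cy + panY),
    0, 1,
  ];
}

// No rotation, unit scale, no pan => identity, for any center:
const identityCheck = rotateScaleMatrix(0, 1, 50, 70, 0, 0);
```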

Nice 👏🏻
Meanwhile I've built a tailor-made solution that doesn't run JS calls when the gesture ends: https://gist.github.com/wcandillon/6d1367528771ecd5257f5de655387c10
It seems to be working pretty well; I'd love to have your feedback on it.

I didn't test it, but it looks very neat :)
BTW, you said that react-native-gesture-handler allows single matrix parameters to be Animated Values, right?
So I could migrate my sample to use react-native-gesture-handler as well

Do you guys know how to obtain the client size of a View/control without using findNodeHandle and then measure? Is there a better way?

@raffaeler As far as I investigated, it is not possible. It looks like there is a PR open for it but it wasn't merged. However, they are currently working hard on the next version of reanimated, and the example you have built is a great use-case to motivate support of matrices in the next version.

These limitations (both in React Native and in Reanimated) really surprise me, because these features were always available in all the other UI technologies I ever used.
I am currently using findNodeHandle to find the exact coordinates of the center of the pinch. Neither window nor screen can be used because they are offset incorrectly (depending on the device, of course).
Thank you anyway

@raffaeler As far as I know, they are working on an exciting new version. And indeed this seems to be the standard approach in other systems (Flutter for instance).

In my example, you can see how I adjust for the origin of the pinch; the default origin (when zooming from the middle of the view) is simply add(CENTER, offset).

  const defaultOrigin = vec.add(CENTER, offset);
  const adjustedFocal = vec.sub(focal, defaultOrigin);

For simpler transformations, I do find the transform API from React Native much simpler and more elegant than other commonly found transform APIs (just a matter of taste I guess).

Well, we will see... sometimes you have to deal with the basic primitives for various reasons.
The different coordinate system in the Svg may require it in certain cases.

BTW, another thing to support is device rotation. This implies multiplying the state matrix in order to invert the coordinates. I am working on it.

The matrix for the device orientation needs to:

  • rotate by 90 degrees (cosT = 0, sinT = +1 or -1)
  • translate by +Y or -Y, where Y is the height of the client app area
    The signs of the parameters depend on whether the rotation was clockwise or counter-clockwise

How can I distinguish the direction of the rotation in React Native? @msand any idea?

Thanks to your tremendous support I was able to get this example out today: https://t.co/QPdJrqmZua?amp=1

@raffaeler I'd love your feedback on this to make sure I didn't overlook anything.

Thank you guys ♥️

Cool, but I would have underlined the power of OSS and collaboration derived from this thread.
When I talk at conferences, I often underline this awesome fact, because I want more people to participate in communities and in sharing code solutions.
And as a community leader, I often stress this with my local community.

I always do, but this is not the final content I am working on. I wanted to get this video out as an intermediary step; therefore this didn't come up yet.

@raffaeler I am now thinking that we could provide a utility function that corresponds to the original request you have made. Something that would look like:

const {translation, scale, rotate} = getAccumulatedTransform([
        ...translate(pinch),
        ...transformOrigin(origin, { scale })
]);
//...
vec.set(translationOffset, translation)
vec.set(scaleOffset, scale)
vec.set(rotateOffset, rotate)

What getAccumulatedTransform() does is the matrix multiplication/decomposition but done in Reanimated. That way, I wouldn't have to do this calculation manually like I did in the video.
What do you think?

@wcandillon If I understand your proposal well, the accumulated transform should include the "translate back" and the optional additional pan (translate) at the end of the three transforms you already mentioned.
The code I posted in a comment above already computes it, and it seems to me that the advantage of moving it to Java (or ObjC) is tiny.

I am not familiar with the underlying engine of react native, but, for example, V8 compiles javascript to native machine code. When it comes to an easy algorithm based on floating point, this compilation is pretty efficient, while other types of algorithms may suffer a lot.

The loss of performance during animations in standard react-native is mostly due to the transitions (posting json messages) to the native side and back, rather than a bunch of sums and muls.

Going back to my initial question, the point was how to specify a matrix, not how to calculate it (sorry if I was not clear). It would be sufficient, IMO, if reanimated supported a type like ValueNumberArray allowing the entire matrix to be passed to the java native engine to be used as a transform.

Please let me know if I was not clear enough

Additional clarification

Since the computed matrix is used to represent the state of the previous gestures, it needs to be modified (i.e. communicated to java) only at the end of a gesture.
When a new gesture begins, the matrix is constant, while the new parameters deriving from the current gesture are separately bound to reanimated (as it already works now).

Another useful thing in reanimated would be the ability to read the Values. This is needed because at the end of a gesture you need to compute the new matrix starting from the single reanimated Values representing translations, rotations and scaling.
The same problem occurs in plain react-native right now. In fact in my example I keep and update both the "Value" and plain numbers: the first ones are used in the JSX code while the numbers are used to compute the matrix at the end of the gesture.
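One way to avoid the duplicate bookkeeping (assuming plain react-native Animated: Animated.Value exposes an addListener whose callback receives { value }) is to mirror each animated value into a plain number via a listener; a minimal stand-in class to illustrate the pattern without importing react-native:

```typescript
// Stand-in for Animated.Value, just for the sketch: a number you can
// set and subscribe to.
class ObservableValue {
  private listeners: Array<(value: number) => void> = [];
  constructor(private value: number) {}
  setValue(value: number): void {
    this.value = value;
    this.listeners.forEach((listener) => listener(value));
  }
  addListener(listener: (value: number) => void): void {
    this.listeners.push(listener);
  }
}

const scale = new ObservableValue(1);
let latestScale = 1; // plain-number mirror, readable at gesture end
scale.addListener((value) => (latestScale = value));
scale.setValue(1.5);
// latestScale is now 1.5 and can be used to compute the matrix
```

The same shape works with a real Animated.Value; the listener runs on the JS thread, so the mirror is readable synchronously when the gesture ends.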

I now have two new problems.

I start panning the view (either a picture or an Svg) on the right. Now I can make other transformations and it works.

1.

But if I click on a portion of the screen that was outside the initial view, the gestures are totally ignored. This is true even when the initial drawing (Svg) was larger than the view and, during the second gesture, I click on the portion of the Svg that initially was not visible. @msand how can I continue to have gestures as if the map was "infinite" (google maps like)?

2.

When rotating the device, the transformations are already made by the react-native engine.
If I sequentially rotate the view, translate it, then rotate the device, at this point any further gesture has the incorrect center point. @wcandillon do you see this problem in your code?

Thank you

For 1., I always have as a child of the gesture handler a view that is never moving (absoluteFill)

  2. would be an issue in my use case as well. I would create a vector for the screen dimensions, listen to the dimension change, and set the new values from the JS thread with .setValue().


Thank you @wcandillon

For 1., I always have as a child of the gesture handler a view that is never moving (absoluteFill)

I am still not that familiar with react-native, but I understood what you mean; sounds perfect.

  2. would be an issue in my use case as well. I would create a vector for the screen dimensions, listen to the dimension change, and set the new values from the JS thread with .setValue().

I am currently listening to DeviceEventEmitter.addListener('namedOrientationDidChange', ...) to receive the orientation changes.
Then I printed to the console the view measured with UIManager.measure, as well as the window and screen sizes retrieved using Dimensions. Apparently none of them helps.

Initially I thought the culprit was the header bar on the screen, but it is always present and the same size in both portrait and landscape mode.

Since the device already repositions the origin to the upper left, there is no need to do it manually. But there is still a small offset that I don't understand.

The device rotation story is super-weird.
Once rotated, the view dimensions are those of the bounding box returned by UIManager.measure (pageX, pageY).
But I had no luck trying to re-align the view. If the view is not aligned correctly, the state matrix becomes invalid.

I suspect react native has a bug in the layout system, because it rotates the view but it is not aligned the same way as in the previous orientation. Sadly, on other platforms this is far simpler.

Confirmed, this is a bug in the react native layout system.
Upon device rotation, it uses the bounding box of the view to realign the view, which is totally wrong.
Now I have patched this in my formulas and it works!!!

@raffaeler Would you like to close this issue and open another one regarding the layout bug?
I would be interested to see it in a small reproducible example.

Regarding the issue at hand, I'm adding support in redash for accumulatedTransform(), which enables us to keep the state of a transform in animated values: https://github.com/wcandillon/react-native-redash/pull/224/files

@wcandillon I am already working on a repro but it is difficult to make it minimal.

For the other support I answered your last comment here.

A piece of feedback for @msand about react-native-svg.

I found some differences with the previous explanation you gave in this long thread:

  • When specifying the transform in style on the root Svg element, the rotate transform must have either "rad" or "deg" appended. This is not required in the standard react-native elements (where the default is rad). You may want to fix it to make it behave the same.
  • I could not make the transform attribute work on the Svg tag (I tried both the array and the string with no commas). Those transformations work on G tags instead.

  • The transforms specified on the Svg tag (either in style or transform attributes) always use a bitmap scaling. This means that zooming becomes blurred. When I do the same transform on G it is zoomed by recalculating vectors.

Hey guys, good thread btw, learned a lot about svg and matrices here 😃

About @raffaeler's last comment:

The transforms specified on the Svg tag (either in style or transform attributes) always use a bitmap scaling. This means that zooming becomes blurred. When I do the same transform on G it is zoomed by recalculating vectors.

This is exactly my problem when I tried @wcandillon's gist. The gist works on the Svg tag, but the vector scales like an image and becomes blurred. I also tried on the G tag, but I guess because the coordinate system changes, the logic stops working. I mean the focal-point logic seems to have no effect; the vector just scales sideways.

@wcandillon I wonder if you tried your gist with an Svg? A pre-made one with a specific viewBox.

Hi @mstrk,
I now have a viewer that works very well using transformations at the G level. This viewer supports layers, which are components I automatically generate from my original svg files (unfortunately SVGR does not give me enough control).
With regards to the "limits" (on zoom and panning), I enforce them by checking the bounding rectangle position and size obtained from getBBox, and this also works very well. It is necessary because I support rotations as well.
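
A minimal sketch of that limit check, assuming a bounding box as read from getBBox on the transformed group (the function and parameter names here are mine, not from react-native-svg):

```typescript
interface Rect { x: number; y: number; width: number; height: number; }

// Clamp a proposed pan (dx, dy) so the drawing's transformed bounding box
// never leaves the viewport entirely. Because the bbox is axis-aligned
// around the transformed content, this also works under rotation.
function clampPan(
  bbox: Rect, viewport: Rect,
  dx: number, dy: number,
): { dx: number; dy: number } {
  const minX = viewport.x - bbox.width;       // bbox fully to the left
  const maxX = viewport.x + viewport.width;   // bbox fully to the right
  const minY = viewport.y - bbox.height;
  const maxY = viewport.y + viewport.height;
  return {
    dx: Math.min(Math.max(bbox.x + dx, minX), maxX) - bbox.x,
    dy: Math.min(Math.max(bbox.y + dy, minY), maxY) - bbox.y,
  };
}
```

The clamped pan is then folded into the gesture transform before it is applied, so the drawing can never be flung off-screen.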

At the moment I do not use reanimated or react-native-gesture-handler because of their lack of matrix support. This is a huge handicap in those libraries.
Anyway, the performance (measured with the React Native profiler) is very good and fluid.
With 3 layers enabled, the total layout time is 100ms (the drawings have several hundred paths and lines).

My suggestion is to not specify width, height or viewBox on the Svg element and to specify preserveAspectRatio="xMinYMin meet" instead.
I then use two nested G elements:

  • the first with the current transformations (the ones applied during the gesture)
  • the other (nested) with the state transformations (the ones accumulated from previous gestures).
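
The accumulation step itself can be sketched like this, assuming the 6-value SVG matrix(a b c d e f) convention (helper names are mine; this is a sketch, not the component's actual code):

```typescript
// An SVG 2D affine transform, i.e. the 6 values of matrix(a b c d e f).
type Mat = [number, number, number, number, number, number];

const IDENTITY: Mat = [1, 0, 0, 1, 0, 0];

// Compose two transforms: the result applies m2 first, then m1,
// matching nested <G> elements (outer G = m1, inner G = m2).
function multiply(m1: Mat, m2: Mat): Mat {
  const [a1, b1, c1, d1, e1, f1] = m1;
  const [a2, b2, c2, d2, e2, f2] = m2;
  return [
    a1 * a2 + c1 * b2,
    b1 * a2 + d1 * b2,
    a1 * c2 + c1 * d2,
    b1 * c2 + d1 * d2,
    a1 * e2 + c1 * f2 + e1,
    b1 * e2 + d1 * f2 + f1,
  ];
}

// Serialize for the transform attribute of a G element:
//   <G transform={toSvg(gesture)}><G transform={toSvg(state)}>...</G></G>
function toSvg(m: Mat): string {
  return `matrix(${m.join(" ")})`;
}

// At gesture end, fold the gesture transform into the accumulated state:
//   state = multiply(gesture, state); gesture = IDENTITY;
```

This way the outer G carries the in-flight gesture and the inner G carries everything accumulated so far, and collapsing them is a single matrix product.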

As soon as I get an almost-final version, I should be able to publish a component, but I need some time because I am in a hurry with the project I am developing.

HTH

Thanks for the help @raffaeler

After a lot of reading about transformations, I ended up not doing any. Instead I calculate the aspect ratio that would fit the screen and set the width and height of the SVG directly.
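
That fit calculation is essentially a "contain" fit (a sketch; the function name is mine):

```typescript
// Scale the SVG's intrinsic dimensions to fit inside the screen
// while preserving the aspect ratio ("contain" behaviour).
function fitToScreen(
  contentW: number, contentH: number,
  screenW: number, screenH: number,
): { width: number; height: number } {
  const scale = Math.min(screenW / contentW, screenH / contentH);
  return { width: contentW * scale, height: contentH * scale };
}
```

The resulting width/height go straight onto the Svg element, so no transform is needed at all for the initial layout.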

I played around with @wcandillon's gist and after some changes I made it work, but as I said above, I did not use any transforms and just changed the top and left properties, which seems to work very well. However, while testing on an Android smartphone with a 90Hz refresh rate, I noticed it was slightly slow. Looking at the perf monitor, the UI thread drops a lot of frames, so maybe this is not the best way to go if changing size and position costs that much on the UI thread. I also tested on an iPhone 6 Plus and the results are very satisfying.

I can share the code with you guys if you are interested in this approach; it might need more polish, and I definitely need to debug these huge drops on the UI thread.

How it looks in the simulator:

(animated GIF of the simulator demo)

@mstrk Good Job :)
I also need rotation, therefore this approach would not work in my case.
Since SVG is strictly 2D, the matrices are just 3x3, and the normalized ones have only 6 parameters, which keeps the number of multiplications and additions very low.
As I said, I have at least three overlapping layers and the perf is very good.
In my case I also have an onPress handler causing a portion of the drawing to be filled with a 'selected' style, and this is also very responsive.

You can use the same approach for rotation; the formula is given by Wikipedia: https://en.wikipedia.org/wiki/Rotation_(mathematics) (it is the same formula used to go from polar to Cartesian coordinates). Overall, if you only have translations, scales, and rotations, using matrices may be overkill.
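
Applied around an arbitrary pivot (e.g. the gesture's focal point), that rotation formula looks like this (a sketch; the function name is mine):

```typescript
// Rotate point (x, y) by `angle` radians around pivot (px, py),
// using the standard 2D rotation formula from the Wikipedia article.
function rotateAround(
  x: number, y: number,
  angle: number,
  px = 0, py = 0,
): { x: number; y: number } {
  const cos = Math.cos(angle);
  const sin = Math.sin(angle);
  const dx = x - px;
  const dy = y - py;
  return {
    x: px + dx * cos - dy * sin,
    y: py + dx * sin + dy * cos,
  };
}
```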

Since React Native also does its own transformation (you can verify that some 2D transforms differ from sending the matrix from processTransform() directly), it might be cleaner not to use matrices for such a use case.

I don't get why @mstrk is getting slow perf.
My drawings are far more complex: I can count more than 300 path elements, and the drawing is always smooth using matrices (as it is supposed to be, according to the graphics bibles :) ).
I suspect that acting on the svg attributes (viewBox, left, right, etc.) is more costly.
The only way to know is to measure with the react-native Profiler.

BTW, I suspect the performance of react-native-svg can be improved, as I see the layout step spending the same amount of time when the entire drawing is visible as when the zoom cuts away most of it. IMO there is room for improvement there.

@raffaeler that's correct. @mstrk is involving the UIManager on every frame, which is expensive in terms of performance.

On your side, I can see how you are getting decent performance, but you are crossing the React Native bridge in order to do matrix multiplication instead of doing your own translate, rotate, scale, and transform without involving the JS thread, by using these formulas: https://github.com/wcandillon/react-native-redash/blob/master/packages/core/src/__tests__/Matrix.test.ts#L66

@wcandillon sure, I am aware I cross the boundary each time, but you also have to deal with some calculation during the gesture. I am not sure about the differences between the two strategies.
Matrices are GPU friendly and are (probably) sent directly to the graphics processor.

The best performance would involve leaving the Java code to deal with the entire set of transformations. While I now understand how to do it in Java, it is useless in my case, as I don't see any glitches with my current solution.

BTW, I tried to stress the app by adding a huge number of paths (more than 1000) to discover the 'limits'. I could see no problem with gestures.
The only defect in this case was changing the fill color on one of those 1000 paths. Using the profiler I can see react-native-svg redrawing everything when the color changes, and this causes a bad perf hit. But, again, there are no matrices here, just the fill color binding.

I can confirm that setting fill on a path or group tag gets a huge performance hit.

@msand I realize that skew is not supported on Android (even via the rotate/scale/rotate decomposition) and that there are also issues on iOS, where the two views below:

// processTransform returns the same param on iOS and Android but here is set to return [{ matrix }]
import processTransform from "react-native/Libraries/Utilities/processTransform";
//...
     <View
          style={{
            width: 100,
            height: 100,
            backgroundColor: "red",
            opacity: 0.5,
            transform: [{ rotateZ: Math.PI / 3 }, { skewX: Math.PI / 3 }],
          }}
        />
        <View
          style={{
            width: 100,
            height: 100,
            backgroundColor: "cyan",
            opacity: 0.5,
            transform: processTransform([
              { rotateZ: `${Math.PI / 3}rad` },
              { skewX: `${Math.PI / 3}rad` },
            ]),
          }}
        />

produce two widely different results even though they should be identical.
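
For reference, the matrix both views should end up with can be computed by hand from the individual transform matrices (a sketch using the 6-value matrix(a b c d e f) convention; helper names are mine):

```typescript
// A 2D affine transform as the 6 values of matrix(a b c d e f).
type Mat = [number, number, number, number, number, number];

const rotateZ = (t: number): Mat =>
  [Math.cos(t), Math.sin(t), -Math.sin(t), Math.cos(t), 0, 0];

const skewX = (t: number): Mat =>
  [1, 0, Math.tan(t), 1, 0, 0];

// Compose: the result applies m2 first, then m1 — i.e. the product R·K
// for a transform list [rotateZ, skewX], as in CSS transform semantics.
function compose(m1: Mat, m2: Mat): Mat {
  const [a1, b1, c1, d1, e1, f1] = m1;
  const [a2, b2, c2, d2, e2, f2] = m2;
  return [
    a1 * a2 + c1 * b2, b1 * a2 + d1 * b2,
    a1 * c2 + c1 * d2, b1 * c2 + d1 * d2,
    a1 * e2 + c1 * f2 + e1, b1 * e2 + d1 * f2 + f1,
  ];
}

// Reference matrix for [{ rotateZ: PI/3 }, { skewX: PI/3 }]:
const expected = compose(rotateZ(Math.PI / 3), skewX(Math.PI / 3));
```

Comparing this reference against what each platform actually renders makes the discrepancy concrete, independent of processTransform.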

Would I encounter the same issues using SVG transforms, or would it be worth it for me to try applying these transformations on SVG elements?

AFAIK, once you create a matrix that includes the skewing, it cannot be decomposed anymore. But I didn't check; it's just what I remember from what I read ages ago.

Did you consider applying a 3D rotation instead of skewing? You can do this only on the svg root element, since SVG only supports 2D transformations.
The resulting effect should be similar.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You may also mark this issue as a "discussion" and I will leave this open.
