I recently talked with Jeff Burtoft from Microsoft about the new Pointer Events proposal. In short, you needn't concern yourself with the input type, i.e. mouse, touch, etc. Just use addEventListener("MSPointerMove", ...) and you'll get all the events.
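A minimal sketch of the idea: one listener covers mouse, touch and pen, and the event itself tells you which device produced it. (In IE10 the event name carries the MS vendor prefix; the unprefixed pointermove came later.) The tiny mock EventTarget below is only there so the snippet runs outside a browser; in a real page you would call addEventListener on a DOM element.

```javascript
// Minimal mock of an EventTarget so the sketch runs outside a browser.
function MockElement() {
  this.listeners = {};
}
MockElement.prototype.addEventListener = function (type, fn) {
  (this.listeners[type] = this.listeners[type] || []).push(fn);
};
MockElement.prototype.dispatch = function (type, event) {
  (this.listeners[type] || []).forEach(function (fn) { fn(event); });
};

var canvas = new MockElement();
var seen = [];

// One handler receives pointer events regardless of input device;
// e.pointerType says whether it came from mouse, touch, or pen.
canvas.addEventListener("MSPointerMove", function (e) {
  seen.push(e.pointerType + "@" + e.clientX + "," + e.clientY);
});

// Simulate a mouse move and a touch move hitting the same handler.
canvas.dispatch("MSPointerMove", { pointerType: "mouse", clientX: 10, clientY: 20 });
canvas.dispatch("MSPointerMove", { pointerType: "touch", clientX: 30, clientY: 40 });
```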
But then he came up with this excellent explanation of how you would implement this, along with an example:
As for the methods that are attached to those events, just as when you are using touchmove on iOS, you need to be cautious about the actions you are trying to perform, since they do fire thousands of times a second. When I build an app that is tracking touches on a screen, instead of having the pointermove event fire a series of methods, I simply have it update an array of finger positions. I then go to my draw method (usually with requestAnimationFrame) and animate based on the array. You can see this in the code example I used: http://touch.azurewebsites.

Having this mediator between the input and the behavior is a stroke of genius. How many times have you struggled with queued events? No more; just have the mediator handle the execution.
Actually, this is a pattern I've implemented before but for some reason tend to forget. I have shipped production code that does something like:
foo.isAnimating = foo.isAnimating || false;
if (foo.isAnimating) { return; } // bail out if a run is already in flight
foo.isAnimating = true;
// do stuff..
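Spelled out in full, that busy-flag guard looks something like the sketch below. The function name and the runs counter are mine, purely for illustration; the point is that re-entrant calls are ignored while a run is in flight, and the flag is cleared when the work finishes.

```javascript
var foo = { isAnimating: false };
var runs = 0;

// Guard: only one animation run may be in flight at a time.
function startAnimation() {
  if (foo.isAnimating) { return false; } // ignore re-entrant calls
  foo.isAnimating = true;
  runs++; // stand-in for the real work
  // ...real work would go here; clear the flag when it finishes:
  foo.isAnimating = false;
  return true;
}

startAnimation();              // first call does the work
foo.isAnimating = true;        // simulate a run still in flight...
var accepted = startAnimation(); // ...so this call is rejected
foo.isAnimating = false;
```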
Now that I've written it down, I just hope the mediator pattern will stick!