I am aware that an experimental package, touch-ui-experiments, supports touch UI on a limited set of devices.
We are wondering if it is possible to support user input via a standard USB-connected pointing device, such as a mouse or a touch frame/overlay superimposed over a standard monitor.
We have worked with some ELO touch monitors and are currently working with a standing totem for digital signage: a vertically mounted 42" monitor with an IR touch detector. In such devices the touch information is conveyed to the computer via a USB cable, using standard “mouse” protocols.
In fact we use some Raspberry Pi 3 boards with Raspbian and everything works out of the box: pointing is like using a mouse, so usually no extra steps are required.
It would be great to use such devices, or any pointing device that honors the mouse protocol, inside info-beamer instead of dedicated hardware.
Is this an acceptable feature request? We are more than willing to investigate (and invest in) this further with the info-beamer developers.
That’s already possible. The input handling code in the package service only handles touch events, but there are already installations using ELO touch monitors. They generate mouse events and can be handled just like touch. A device_event function should already show events generated by an attached ELO monitor. Let me know if that works.
Ok. We will try it; unfortunately the ELO touch monitors are at our client’s right now, so we can only test with a standing totem with an IR frame attached.
As far as we can tell it works just like a mouse in Raspbian. We will post the results here shortly.
I’m fairly confident that it’ll just work out of the box. As long as the ELO behaves like any other HID device and is recognized as something in /dev/input/, you should be good. Looking forward to your results.
Hello, I’m working together with Walter.
I debugged both the Lua and the Python code and noticed that in our case the input_state.down variable is always false; maybe it cannot recognize when the touch is down.
input_state lives in Lua, the second half of the input handling code. It’s filled by sending UDP packets to info-beamer, which are then received and handled there. If down is always false, it most likely means that no such packet has been sent.
The sending happens in the service (see the hosted.py sending code). This is the code that wraps sending UDP data to Lua. I suggest you uncomment the print statement above it and look at what low-level input events arrive in Python. I’m pretty sure it’s something other than BTN_TOUCH and is therefore not getting recognized.
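For context, a minimal sketch of what “looking at the low-level events” means (this is not the package’s actual hosted.py code; the struct layout and numeric constants come from the Linux input subsystem, and the function name is illustrative):

```python
# Each read from a /dev/input/event* node yields one fixed-size
# struct input_event; decoding it exposes the (type, code, value)
# triple that the uncommented print statement would reveal.
import struct

# struct input_event: struct timeval time; __u16 type; __u16 code; __s32 value;
EVENT_FORMAT = "llHHi"                    # native long sizes, matching the kernel
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

EV_KEY = 0x01                             # from linux/input-event-codes.h
BTN_MOUSE = 0x110                         # sent by mice and many IR frames
BTN_TOUCH = 0x14A                         # sent by "real" touch devices

def decode_event(buf):
    """Unpack one raw input_event buffer into a (type, code, value) triple."""
    _sec, _usec, etype, code, value = struct.unpack(EVENT_FORMAT, buf)
    return etype, code, value
```

Seeing EV_KEY with code BTN_MOUSE instead of BTN_TOUCH in that output would explain why the down state never changes.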
In the if statement of the device_event function I added the BTN_MOUSE event, and now input_state.down becomes true when it should.
Now the problem is that the UI doesn’t react to the input.
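The BTN_MOUSE change can be sketched like this (names here are illustrative, not the package’s actual ones; the point is simply to accept both button codes when tracking the pressed state):

```python
EV_KEY = 0x01       # constants from linux/input-event-codes.h
BTN_TOUCH = 0x14A   # what the original if statement matched
BTN_MOUSE = 0x110   # what an IR frame / ELO monitor actually sends

def update_down(state, etype, code, value):
    """Track the pressed state for both touch and mouse button events."""
    if etype == EV_KEY and code in (BTN_TOUCH, BTN_MOUSE):
        state["down"] = value == 1
    return state
```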
The package is really more of a demo, and I only played around with it on a touch device. You might have to keep the button pressed down. Or maybe the coordinates are off? Can you check whether they correspond to the screen coordinates?
I see that the coordinates are wrong, thank you! I will explore this more in the coming days.
Interesting. I would have thought that they go from 0 -> NATIVE_WIDTH for x, so 0-1920 (or maybe 1919?). What coordinates do you see? I might have to look at how the Linux input system works in that case.
I see coordinates from almost (0, 0) to almost (32000, 32000).
I guess that makes sense. Thinking about it, I’m surprised the coordinates get clamped at all: I can indefinitely generate “moving the mouse to the left” events after all. I think a different approach similar to this older code (combined with something like this) might make more sense: Send relative movements only and then clamp them to the screen coordinates. Optionally even scaling them if you need different sensitivity.
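The relative-movement approach could be sketched like this (a rough illustration, not the package’s actual code; the function name and the sensitivity parameter are assumptions): accumulate each REL_X/REL_Y delta into an absolute position and clamp it to the screen.

```python
EV_REL = 0x02            # constants from linux/input-event-codes.h
REL_X, REL_Y = 0x00, 0x01

def apply_rel(pos, code, value, width=1920, height=1080, scale=1.0):
    """Return the new (x, y) after one relative event, clamped to the screen."""
    x, y = pos
    if code == REL_X:
        x = min(max(x + value * scale, 0), width - 1)
    elif code == REL_Y:
        y = min(max(y + value * scale, 0), height - 1)
    return x, y
```

Scaling the delta before clamping is where a different sensitivity would come in.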
Just to follow up: we have succeeded in making everything work as expected! Wow! Thanks for the support and thanks to @salvatoregiordanoo
You have already merged one pull request, which handles the mouse-button-down event emitted by our monitor.
We have made another merge request that basically adds some configuration to your touch package: one option for screen rotation and four parameters to set up offset and scaling, to accommodate different input resolutions and inner workings.
Then in Lua we handle the raw data from the Python service, scaling and offsetting it to map to the appropriate screen resolution/orientation.
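The mapping amounts to arithmetic like the following (sketched in Python for brevity, though the real code lives on the Lua side; parameter names and the 32000 raw range are taken from our device, and the rotation handling here is only one possible variant):

```python
def map_touch(raw_x, raw_y, raw_max=32000.0,
              offset_x=0.0, offset_y=0.0,
              scale_x=1920.0, scale_y=1080.0,
              rotate=False):
    """Map raw device coordinates onto screen pixels."""
    x = offset_x + raw_x / raw_max * scale_x
    y = offset_y + raw_y / raw_max * scale_y
    if rotate:
        # portrait-mounted screen: swap the axes; depending on how the
        # frame is mounted you may also need to mirror one of them
        x, y = y, x
    return x, y
```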
Thanks for the wonderful support!
Update: we have managed to make it work with a proper mouse. The relevant code to change was in the Python service file, to accommodate relative-type input devices.
In particular the event to listen to was event.type == ecodes.EV_REL, and we expanded the state thus:

state = dict(
    down = False,
    x = 0,
    y = 0,
    hover_x = 0,
    hover_y = 0,
)
Then we increment/decrement hover_x/hover_y for every relative movement (also clamping to the actual screen resolution) and pass the new props along to Lua.
Then in Lua, when there is a mouse down (or a mouse up after a mouse down), we copy hover_x/y into the new input_state and everything keeps working.
As a bonus, in debug mode we draw a red semi-transparent square around the hover_x,y coordinates in the render cycle, so we can see where the “pointer” is.
We made this configurable with a bunch of properties in the config.json file, so the same code can work for absolute-position devices (touch screens) and mouse-style ones (the relative ones).
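For illustration, the configuration could look something like this (the key names here are hypothetical, not necessarily the ones in the actual merge request):

```json
{
    "rotation": 90,
    "offset_x": 0,
    "offset_y": 0,
    "scale_x": 1.0,
    "scale_y": 1.0,
    "relative_input": true
}
```

With relative_input false and suitable offset/scale values, the same code path handles an absolute-position IR frame or ELO monitor.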