Apr 21, 2014
 

First of all, remove xpra and cython if you had them installed:

aptitude purge xpra cython

Update your package lists, as we are going to install a lot of packages:

aptitude update

Prepare required prerequisites

Then follow the instructions on the xpra Wiki for building Ubuntu / Debian style:

apt-get install libx11-dev libxtst-dev libxcomposite-dev libxdamage-dev \
    python-all-dev python-gobject-dev python-gtk2-dev

apt-get install xvfb xauth x11-xkb-utils
apt-get install libx264-dev libvpx-dev libswscale-dev libavcodec-dev

The file mentioned in the how-to, vpx.pc, should exist:

cat /usr/lib/pkgconfig/vpx.pc
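Alternatively, if pkg-config is installed, it can confirm that libvpx is visible to the build (a quick check, equivalent to inspecting the file above):

pkg-config --modversion vpx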

You will need to build and install Cython from source, as the version in the Raspbian repository is too old (0.15.1, while at least 0.16 is needed).

wget http://www.cython.org/release/Cython-0.20.1.tar.gz
tar -xzf Cython-0.20.1.tar.gz

Change into the newly extracted directory (cd Cython-0.20.1) and install Cython:

python setup.py install

This will take quite a while. Test that you have the correct cython version:

cython --version

should yield Cython version 0.20.1

Download and extract source

wget https://www.xpra.org/src/xpra-0.12.3.tar.bz2
tar -xjf xpra-0.12.3.tar.bz2

Note: a newer release may be available; please check before downloading.

Change into the extracted directory (cd xpra-0.12.3). We need to apply a first patch:

patch < patches/old-libav.patch

When prompted for the file to patch, enter xpra/codecs/dec_avcodec/decoder.pyx

Next patch (several files in one go):

patch < patches/old-libav-pixfmtconsts.patch

When the patcher asks for each file, simply copy and paste the file name from the patch’s “Index” line, for example xpra/codecs/csc_swscale/colorspace_converter.pyx

Next patch (also several files):

patch < patches/old-libav-no0RGB.patch

Proceed as above (copy and paste each file name, without the leading /).
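If you would rather avoid the interactive prompts, patch can usually strip that leading / by itself. This is only a sketch and was not part of my build (the -p1 level assumes the paths in the patches carry exactly one leading /); fall back to the prompts described above if patch cannot find the files:

patch -p1 < patches/old-libav.patch
patch -p1 < patches/old-libav-pixfmtconsts.patch
patch -p1 < patches/old-libav-no0RGB.patch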

The source directory also contains a useful README, which tells you that the next step is:

./setup.py install --home=install

After the compilation is done, you can either set PYTHONPATH to include the install subdirectory (and do so in every session), like this:

export PYTHONPATH=$PWD/install/lib/python:$PYTHONPATH

or install the “finished” files to the appropriate targets. From the install directory do:

cp bin/* /usr/bin/.
cp -R lib/* /usr/lib/.
cp -R share/* /usr/share/.

xpra will now be the newest version:

xpra --version

xpra v0.12.3

Even then, you will still have to point PYTHONPATH at the new files in /usr/lib/python:

export PYTHONPATH=/usr/lib/python:$PYTHONPATH
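To avoid typing that export in every new terminal, you can append it to your shell profile. This is just a convenience sketch and assumes you copied the files to /usr/lib as shown above:

echo 'export PYTHONPATH=/usr/lib/python:$PYTHONPATH' >> ~/.profile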

 

Test & Test results

OK, here’s how to set up a test session:

Set up a test server which has xpra installed (you can install it through the winswitch packages, which will get you the newest xpra version on Ubuntu and Debian).

On the Pi, start the X Window System, open LXTerminal, and run the following commands.

export PYTHONPATH=/usr/lib/python:$PYTHONPATH

Start an xpra session via SSH (can be killed using Ctrl-C, and reconnected to using the same command):

xpra start ssh:maxcs@192.168.1.61:122 --start-child=xterm --encoding=h264

Read the manpage (man xpra) to have a look at the other options.
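For reference, reattaching after a Ctrl-C and shutting the session down cleanly should look roughly like this (same host and display as above; check the manpage for the exact semantics of your version):

xpra attach ssh:maxcs@192.168.1.61:122 --encoding=h264
xpra stop ssh:maxcs@192.168.1.61:122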

Test results

[Screenshot: xpra-raspberry-h264 (xpra session on the Raspberry Pi with h264 encoding)]

The rgb and png encodings have too much latency to be usable.

jpeg is barely usable, even when the application (for instance AbiWord) is resized to less than full screen.

webm encoding delivers worse image quality, but seems a bit more usable.

h264 decoding is NOT done in hardware in the default code (we’ll look into this). Surprisingly it is still the “most fluid to use” one.

I suspect that no H.264 decoding is actually taking place and that the server-side xpra falls back to a different encoder (webm?). Anyway, one can even “watch” videos with this (a couple of frames per second, with heavy artifacts).

For very light administration / checking of remote contents, etc. xpra can be used as is. We will need to enable hardware decoding of h264, though, for it to yield real benefits.

Please note: our interest rests solely in streaming TO the Raspberry Pi, not FROM it; we will not test or patch anything to speed up administration of the Pi at this point.

 

Notes & Further reading

Dependencies of xpra package:

(you can display these with “apt-cache showpkg xpra” on a machine which has a newer version of the package installed, e.g. Ubuntu AMD64):

Dependencies:
0.12.3-1 – python2.7 (0 (null)) python (2 2.7.1-0ubuntu2) python (3 2.8) libavcodec53 (18 4:0.8-1~) libavcodec-extra-53 (2 4:0.8-1~) libavutil51 (18 4:0.8-1~) libavutil-extra-51 (2 4:0.8-1~) libc6 (2 2.14) libgtk2.0-0 (2 2.24.0) libswscale2 (18 4:0.8-1~) libswscale-extra-2 (2 4:0.8-1~) libvpx1 (2 1.0.0) libx11-6 (0 (null)) libx264-120 (0 (null)) libxcomposite1 (2 1:0.3-1) libxdamage1 (2 1:1.1) libxext6 (0 (null)) libxfixes3 (0 (null)) libxrandr2 (2 4.3) libxtst6 (0 (null)) python-gtk2 (0 (null)) x11-xserver-utils (0 (null)) xvfb (0 (null)) python-gtkglext1 (0 (null)) python-opengl (0 (null)) python-numpy (0 (null)) python-imaging (0 (null)) python-appindicator (0 (null)) openssh-server (0 (null)) python-pyopencl (0 (null)) pulseaudio (0 (null)) pulseaudio-utils (0 (null)) python-dbus (0 (null)) gstreamer0.10-plugins-base (0 (null)) gstreamer0.10-plugins-good (0 (null)) gstreamer0.10-plugins-ugly (0 (null)) python-gst0.10 (0 (null)) openssh-client (0 (null)) ssh-askpass (0 (null)) python-numeric (0 (null)) python-lz4 (0 (null)) keyboard-configuration (0 (null)) xpra:i386 (0 (null))
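If you only installed the build prerequisites above, some of the pure runtime dependencies from this list may still be missing on the Pi. As a suggestion (not part of the original how-to; apt will simply skip anything already present), the ones Raspbian does ship can be installed directly:

apt-get install python-gtk2 python-imaging python-numpy python-dbus x11-xserver-utils xvfb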

CheckInstall

Optional: install checkinstall, to create a package which you can easily remove or re-deploy to other computers:

aptitude install checkinstall
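checkinstall simply wraps the install command, so from the xpra source directory something along these lines should produce a .deb package instead of copying files around by hand (a sketch, not tested in this build):

checkinstall ./setup.py install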

 

Troubleshooting

Patches

error: implicit declaration of function ‘avcodec_free_frame’

you need to apply the patch patches/old-libav.patch

error: ‘AV_PIX_FMT_YUV420P’ undeclared

you need to apply the patch patches/old-libav-pixfmtconsts.patch

error: ‘PIX_FMT_0RGB’ undeclared

you need to apply the patch patches/old-libav-no0RGB.patch

The other patches were NOT needed in my experimental compilation.

 

ImportError: No module named xpra.platform

Once you try to execute xpra (preferably from LXTerminal), you may get this message. The PYTHONPATH environment variable needs to be set:

export PYTHONPATH=/usr/lib/python:$PYTHONPATH

Aug 02, 2013
 

This is still a work in progress with unsatisfactory results (image quality, delay, very low frame rate), but here it is for the brave-hearted and for those researching in the same direction:

Set up Windows streaming host

This can be a multi-monitor machine. Your left-most monitor will be streamed.

I generally use FullHD resolution for testing.

  • Install a Direct Show Screen Capture Filter for Windows. We used the direct show filter provided with “Screen Capturer Recorder” by Roger D Pack. Roger also includes an audio direct show capturer. And all free of charge – a real bargain 😉
  • Maybe a reboot is necessary here
  • Install the latest version of ffmpeg from Zeranoe. Opt for the static builds (probably the 64-bit build if you are running a modern 64-bit Windows)
  • extract the download to a safe location
  • Open PowerShell, and navigate to the location
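A quick sanity check that the extracted ffmpeg binary runs from there:

.\ffmpeg -version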

List the available screen filter devices:

This and all following shell commands are to be issued in the PowerShell. 

.\ffmpeg -list_devices true -f dshow -i dummy

This will show you the available input devices to capture from. My list looks like this, for instance:

 DirectShow video devices
  "Integrated Webcam"
  "screen-capture-recorder"
 DirectShow audio devices
  "Microphone (2- High Definition Audio Device)"
  "virtual-audio-capturer"

Start the stream:

.\ffmpeg -f dshow -i video="screen-capture-recorder" -vcodec libx264 -vprofile baseline -preset ultrafast -tune zerolatency  -pix_fmt yuv420p -b:v 400k -r 30  -threads 4  -fflags nobuffer -f rtp rtp://192.168.1.14:1234

I used PowerShell to start this; that is why the .\ prefix is needed to run an application from the current folder.

  • libx264 is used as video codec, rather than mpeg4 (for superior quality – the Raspi is capable of H264 hardware decoding)
  • baseline profile needs to be used together with -pix_fmt yuv420p – this basically reduces the encoding to a simple subset of the full standard. Leaving out these two options led to the streaming not working, but you may be able to figure out something – please comment!
  • -preset ultrafast and -tune zerolatency both accelerate the video output. I have a latency of about 1–2 seconds in our lab here
  • -b:v 400k sets the target bitrate (as variable)
  • -r 30 this sets the framerate to 30
  • -threads 4 – give more threads to ffmpeg
  • -fflags nobuffer – should decrease latency even further. Not sure if it does, though.
  • -f rtp – specifies the output format. Here we use RTP and stream it directly to the Raspberry Pi, which has the IP 192.168.1.14 on our network. You can choose whatever you like for the port; by an odd coincidence we chose 1234. Aliens?!?

Hit “Enter” and ffmpeg will start streaming. It will show you handy statistics – current frame number, framerate, quality, total size, total time, current bitrate, duplicated capture-frames, dropped capture-frames (i.e. the capturing rate does not align with the streaming rate). Do not worry too much about those for now.

Please note that you need some horsepower for capturing, encoding and streaming in real-time.
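If you want to experiment, a variant of the command above (untested here) forces a keyframe every second (-g 30 at 30 fps), which should let the receiver recover from packet loss a bit faster, and explicitly disables any audio (-an):

.\ffmpeg -f dshow -i video="screen-capture-recorder" -vcodec libx264 -vprofile baseline -preset ultrafast -tune zerolatency -pix_fmt yuv420p -b:v 400k -r 30 -g 30 -an -threads 4 -fflags nobuffer -f rtp rtp://192.168.1.14:1234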

Set up Raspberry Pi

omxplayer can’t handle RTP streams directly – thus, we resort to GStreamer.

GStreamer 1.0 includes special support for the Raspberry Pi’s Broadcom SoC’s VideoCore IV hardware video functions (also known as OpenMax). Unfortunately, the Raspbian maintainers do not want to include it (yet), in order not to diverge too far from the official Debian repositories.

Luckily for you, though, someone has precompiled the binaries and set up a repository. See this thread for more background information, or simply follow my instructions:

sudo nano /etc/apt/sources.list

This will open nano to edit your package repository list. Please add the following line into this file:

deb http://vontaene.de/raspbian-updates/ . main

After saving the file (Ctrl + O, Ctrl + X), run the following commands:

sudo aptitude update
sudo aptitude install libgstreamer1.0-0-dbg gstreamer1.0-tools libgstreamer-plugins-base1.0-0 gstreamer1.0-plugins-good gstreamer1.0-plugins-bad-dbg gstreamer1.0-omx gstreamer1.0-alsa

This will install the necessary GStreamer 1.0 components.
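To verify that the hardware decoder element is actually available, ask GStreamer about it; this should print the plugin details rather than an error:

gst-inspect-1.0 omxh264dec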

Start the stream receiver & decoder chain:

gst-launch-1.0 -v udpsrc port=1234 caps='application/x-rtp,payload=(int)96,encoding-name=(string)H264' ! queue ! rtph264depay ! h264parse ! omxh264dec ! autovideosink sync=True

This can be done as the user pi. Please note that this may not be the perfect command to achieve playback, but it is a good starting point, as it works!

GStreamer sets up “pipelines”, in which data is passed on from step to step and transformed along the way. While it may look like a lot at first glance, it is quite logical once you have figured it out.

  • we specify a UDP source (udpsrc), the port, and “caps”
  • Without the RTP caps, playback is not possible. They are apparently not transmitted along with the stream (normally an SDP file would carry this information), so we have to specify the caps manually.
  • In the caps we specify some information for the pipeline
  • queue may be omitted, I am not sure what it does
  • rtph264depay – depayload h264 data from rtp stream
  • h264parse – parse h264 data
  • omxh264dec – decode the data with BroadCom OpenMAX hardware acceleration
  • autovideosink – put the result on the display
  • sync=True – I am not sure whether this does anything, or whether it is in the right place and form. It was an attempt to fix the gst_base_sink_is_too_late problems (but it did NOT fix them).
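One tweak that may be worth trying (untested in this setup) is an rtpjitterbuffer element right after the source; it buffers and reorders RTP packets and can smooth out network jitter at the cost of the configured latency (in ms). Note that rtpjitterbuffer needs the clock-rate in the caps; 90000 is the standard value for H.264 video:

gst-launch-1.0 -v udpsrc port=1234 caps='application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264,payload=(int)96' ! rtpjitterbuffer latency=200 ! rtph264depay ! h264parse ! omxh264dec ! autovideosink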

Issues

slow screen updates

These are very likely caused by a slow screen capture refresh rate; this may be better with a different screen capturer.

On Windows 8, with a pretty powerful Core i7 machine, the capture filter reports a possible fps of 15.41 (while 30 fps was negotiated). This is using Roger’s / betterlogic’s screen-capture-recorder. Roger attributes this to Aero.

See more about it here and here (the latter also provides a list of other available DirectShow screen capture filters).

artifacts

GStreamer shows massive H.264 artifacts; Matthias Bock has opened an issue for this, which also contains some further hints.

This seems to be related to the bitrate set in ffmpeg; if I lower it to about 400k, the artifacts become less severe and the image quality is quite OK. Also, use a variable bitrate instead of a constant one.

gst_base_sink_is_too_late()

This may be related to the Pi’s fake hardware clock (?). It also appears when running gstreamer with a simple test image setup:

gst-launch-1.0 videotestsrc ! autovideosink

gstbasesink.c(2683): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink0/GstEglGlesSink:autovideosink0-actual-sink-eglgles:
There may be a timestamping problem, or this computer is too slow.

 

The videotestsrc command above simply displays a test video image.
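If the warning keeps flooding the console when receiving the actual stream, one commonly suggested mitigation (untested here) is to disable clock synchronisation on the sink, which trades correct pacing for never dropping “late” buffers:

gst-launch-1.0 -v udpsrc port=1234 caps='application/x-rtp,payload=(int)96,encoding-name=(string)H264' ! queue ! rtph264depay ! h264parse ! omxh264dec ! autovideosink sync=false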

Sound

I have not tried sound yet. Sound should be fed into ffmpeg using the following arguments:

-i audio="virtual-audio-capturer":video="screen-capture-recorder"

This is taken directly from Roger’s GitHub documentation.

Ideas

  • try to use gstreamer on Windows for streaming?
  • Adjust Parameters for betterlogic/Roger’s direct show capturer
    • apparently it hits the ceiling at 15 fps with Aero on
  • Use a different direct show capturer
  • Tune quality for ffmpeg stream

Background info

  • H.264 is MPEG-4 Part 10, also known as MPEG-4 AVC (“Advanced Video Coding”), and is the more modern and data-efficient codec;
  • whereas MPEG-4 Part 2 (MPEG-4 Visual) is based on the older compression techniques used in MPEG-2 and is also implemented in DivX, Xvid, etc.
  • you can also use .\ffplay -i udp://:1234 to test the streaming output on the local machine. The video quality IS NOT TO BE USED AS A REFERENCE; it just shows that it “works”. Change the target IP in the ffmpeg command accordingly (“localhost” instead of the Raspi’s IP should do, I believe).
