Configuration

To configure FPV, open the “FPV” menu at the top and select “Preferences”. A dialog pops up with a number of tabs.

Screen network

The first tab allows you to specify the network settings. The H.264 video stream must be transmitted using UDP multicast. The first two settings specify the multicast IP address and port (which must match the settings configured in the IP camera). The other two settings are used for receiving telemetry information, which follows a compressed, application-specific data protocol as specified in the telemetry document.
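Before debugging anything in the application itself, it can help to check that the multicast stream is actually reaching your machine. The following sketch (illustrative only, not part of FPV) joins the default group and port mentioned above and waits briefly for a packet:

```python
# Quick check (illustrative, not part of FPV): join the multicast group the
# camera transmits to and wait briefly for a packet to arrive.
import socket

GROUP = "239.255.12.12"   # application default multicast IP
PORT = 5004               # application default port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Membership request: 4-byte group address + 4-byte interface (any).
mreq = socket.inet_aton(GROUP) + socket.inet_aton("0.0.0.0")
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
except OSError:
    pass  # joining can fail on hosts without a multicast route

sock.settimeout(2.0)
try:
    data, addr = sock.recvfrom(2048)
    print(f"Receiving: {len(data)}-byte packet from {addr[0]}")
except socket.timeout:
    print("No packets received; check the camera and network settings.")
```

If no packets arrive, the problem is on the camera or network side, not in the application's preferences.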

The “jitter buffer” is a very small buffer in the rendering pipeline that allows the pipeline to absorb variance in the Wi-Fi latency. A setting of 40 to 60 ms is recommended, but if playback becomes jerky this value can be increased further, at the expense of slightly higher overall latency.

Screen battery

The “Battery” settings allow you to specify what your battery setup looks like. The application uses this to calculate remaining flight time.
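The document does not spell out the calculation, but remaining flight time is essentially usable capacity divided by average current draw. A rough sketch follows; the function name, the 80% usable-capacity rule of thumb, and the example numbers are illustrative assumptions, not the application's actual formula:

```python
# Hedged sketch (not the app's actual formula): estimate remaining flight
# time from usable battery capacity and average current draw.
def remaining_flight_minutes(capacity_mah, charge_pct, avg_draw_a,
                             usable_fraction=0.8):
    """Estimate minutes of flight left.

    capacity_mah    -- rated pack capacity in mAh
    charge_pct      -- current state of charge, 0-100
    avg_draw_a      -- average current draw in amperes
    usable_fraction -- LiPo packs should not be run flat; 80% is a
                       common rule of thumb (assumption)
    """
    usable_mah = capacity_mah * usable_fraction * (charge_pct / 100.0)
    return usable_mah / (avg_draw_a * 1000.0) * 60.0

# Example: a 2200 mAh pack at full charge, drawing 10 A on average.
print(round(remaining_flight_minutes(2200, 100, 10), 1))  # → 10.6
```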

Screen flight

The “Flight” tab allows you to specify attributes for flying. The preferred altitude above home indicates a ceiling for your flight. The stall angle of attack should be derived from the manufacturer’s specifications for your airplane, specifically the graph relating lift coefficient to angle of attack; since this is a model rather than a full-scale aircraft, the indication should only be considered a guideline. The stall speed is the minimum permissible airspeed for the current weight of the airplane, which is effectively the airspeed at which the airplane flies at the maximum angle of attack without changing altitude. The camera tilt angle compensates for the angle at which the camera is mounted on the airframe: pitched forward is a positive angle, pitched backward a negative one.
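The stall speed described above follows from the standard lift equation: at stall, lift at the maximum lift coefficient just equals weight, so V_s = sqrt(2mg / (ρ·S·CL_max)). A small sketch; the example mass, wing area, and CL_max are assumed values for illustration, not taken from the application:

```python
import math

def stall_speed(mass_kg, wing_area_m2, cl_max, rho=1.225):
    """Stall speed in m/s from the lift equation L = 0.5*rho*V^2*S*CL:
    at stall, lift equals weight, so V_s = sqrt(2*m*g / (rho*S*CL_max)).
    rho defaults to sea-level air density in kg/m^3."""
    g = 9.81
    return math.sqrt(2 * mass_kg * g / (rho * wing_area_m2 * cl_max))

# Example values for a typical small foam model (assumptions):
v = stall_speed(mass_kg=1.5, wing_area_m2=0.30, cl_max=1.2)
print(round(v, 1))  # → 8.2 (m/s)
```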

Screen advanced

The advanced settings allow you to save the video stream and to specify the aspect ratio and the field of view of your camera. The latter is needed so the virtual cues appear in the right place in the final image. The horizontal field of view is usually listed in the lens or camera specifications. The aspect ratio is determined by dividing the width in pixels by the height in pixels.
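As a quick illustration of the aspect-ratio calculation (using the 1280x720 resolution that appears in the example caps output later in this section):

```python
# Illustrative only: compute the aspect ratio the way the text describes,
# by dividing width by height in pixels.
from math import gcd

def aspect_ratio(width, height):
    """Return the reduced W:H pair and the width/height quotient."""
    d = gcd(width, height)
    return (width // d, height // d), width / height

ratio, value = aspect_ratio(1280, 720)
print(ratio, round(value, 3))  # → (16, 9) 1.778
```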

Because not all cameras resend the necessary information about the video stream, the specific settings for your camera and encoder setup need to be specified here. If you provide incorrect information here while the save option is checked, the rendering pipeline will not work and the video stream will not appear.

Saved video streams can be found in your Movies folder in the FPV subfolder.

The codec_data field allows the stream to be saved correctly so that another player application can open the video file at a later time and play its content. Unfortunately, discovering it for your IP camera is a little cumbersome. To find the values that need to go into this tab, you can do the following:

  1. Turn off the IP camera. Ideally, hook it up to the Mac directly in such a way that it gets an IP address on startup and starts transmitting.
  2. Open a terminal on the Mac and change directory into “/Applications/FPV.app/Contents/Frameworks/GStreamer.framework/Commands”
  3. Now run the following command line, inserting a valid location for the “filesink” component. The values for multicast-group and port are based on the default values of this application; if you changed those, modify them here as well. After you paste the command, press “Enter” to start it.
  4. ./gst-launch-0.10 -ve udpsrc multicast-group=239.255.12.12 auto-multicast=true port=5004 typefind=true ! "application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264,payload=(int)96" ! gstrtpjitterbuffer latency=60 ! rtph264depay ! "video/x-h264,stream-format=(string)byte-stream,alignment=(string)nal" ! h264parse config-interval=5 ! "video/x-h264,stream-format=(string)avc,alignment=(string)au" ! mp4mux ! filesink location=/Users/<your-login-name>/location-of-file.mp4

  5. Only now turn on the IP camera. As soon as it starts transmitting, the console where the command was started begins outputting messages that look like this:
  6. /GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = video/x-h264, stream-format=(string)avc, alignment=(string)au, width=(int)1280, height=(int)720, framerate=(fraction)25/1, parsed=(boolean)true, pixel-aspect-ratio=(fraction)1/1, codec_data=(buffer)014d401fffe10017674d401fdb014016ec0440000003004000000ca3c60cb801000468eaccb2

  7. These messages contain the values that must be used to populate the fields in the screen above. Do not include the “(buffer)” or “(int)” type markers; insert only the values (here: 1280, 720, and 014d401fffe10017674d401fdb014016ec0440000003004000000ca3c60cb801000468eaccb2).

Note: If the camera is already running before you start the command line, the information related to decoding will already have passed through. This is why the camera must only be started after the command line has been prepared.
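Copying the values out of the caps line by hand is error-prone. A small script can extract them instead; this sketch (illustrative, not part of FPV) parses the sample caps line from step 6:

```python
import re

# Hedged sketch: pull width, height, and codec_data out of a GStreamer caps
# line like the one printed in step 6, ready to copy into the Advanced tab.
caps = ("video/x-h264, stream-format=(string)avc, alignment=(string)au, "
        "width=(int)1280, height=(int)720, framerate=(fraction)25/1, "
        "parsed=(boolean)true, pixel-aspect-ratio=(fraction)1/1, "
        "codec_data=(buffer)014d401fffe10017674d401fdb014016ec04"
        "40000003004000000ca3c60cb801000468eaccb2")

def parse_caps(line):
    """Return a dict of field name -> raw value, with the (type) tag removed."""
    fields = {}
    for name, value in re.findall(r"(\w[\w-]*)=\((?:\w+)\)([^,]+)", line):
        fields[name] = value.strip()
    return fields

f = parse_caps(caps)
print(f["width"], f["height"], f["codec_data"][:8] + "...")  # → 1280 720 014d401f...
```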
