Advanced search

Media (0)


No media matching your criteria is available on this site.

Other articles (26)

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document is to be attached automatically; objet, the type of object to which (...)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including:
    • critique of existing features and functions
    • articles contributed by developers, administrators, content producers and editors
    • screenshots to illustrate the above
    • translations of existing documentation into other languages
    To contribute, register for the project users’ mailing (...)

  • Retrieving information from the master site when installing an instance

    26 November 2010, by

    Purpose
    On the main site, a shared-hosting instance is defined by several things: the data in the spip_mutus table; its logo; its main author (id_admin in the spip_mutus table, matching an id_auteur in the spip_auteurs table), who will be the only one able to definitively create the shared instance;
    It can therefore be quite sensible to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)

On other sites (3586)

  • Summary Video Accessibility Talk

    1 January 2014, by silvia

    I’ve just got off a call to the UK Digital TV Group, for which I gave a talk on HTML5 video accessibility (slides best viewed in Google Chrome).

    The slides provide a high-level summary of the accessibility features that we’ve developed in the W3C for HTML5, including:

    • Subtitles & Captions with WebVTT and the track element
    • Video Descriptions with WebVTT, the track element and speech synthesis
    • Chapters with WebVTT for semantic navigation
    • Audio Descriptions through synchronising an audio track with a video
    • Sign Language video synchronized with a main video

    I received some excellent questions.

    The obvious one was about why WebVTT and not TTML. While the advantages of WebVTT should be clear to anyone who has tried to implement TTML support, for some the browsers’ decision to go with WebVTT still seems to be bothersome. The advantages of CSS over XSL-FO in a browser context are obvious, but not as much outside browsers. So the simplicity of WebVTT and its clean integration with HTML have to speak for themselves. Conversion between TTML and WebVTT was a feature that was asked for.

    I received a question about how to support ducking (reducing the volume of the main audio track) when using video descriptions. My reply was to either use video descriptions with WebVTT and duck during the times that a cue is active, or, when using audio descriptions (i.e. actual audio tracks), to add an additional WebVTT file of kind=metadata to mark the intervals in which to duck. In both cases some JavaScript will be necessary.
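
    A minimal sketch of the second approach (untested; the element ids, the ducking.vtt file and the 0.25 volume level are illustrative choices, not from the talk), in TypeScript:

    // Duck the main audio whenever a cue of a metadata track is active.
    // Assumes markup like <video id="v"><track kind="metadata" src="ducking.vtt"></video>.
    const video = document.querySelector("#v") as HTMLVideoElement;
    const trackEl = video.querySelector("track[kind='metadata']") as HTMLTrackElement;

    trackEl.track.mode = "hidden"; // load and fire cues without rendering them
    trackEl.track.addEventListener("cuechange", () => {
      const cues = trackEl.track.activeCues;
      video.volume = cues !== null && cues.length > 0 ? 0.25 : 1.0; // duck while a description plays
    });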

    I received another question about how to do clean audio, which I had almost forgotten was a requirement from our earlier media accessibility document. “Clean audio” consists of isolating the audio channel containing the spoken dialog and important non-speech information that can then be amplified or otherwise modified, while other channels containing music or ambient sounds are attenuated. I suggested using the mediagroup attribute to provide a main video element (without an audio track) and then the other channels as parallel audio tracks that can be turned on and off and attenuated individually. There is some JavaScript coding involved on top of the APIs that we have defined in HTML, but it can be implemented in browsers that support the mediagroup attribute.
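
    A sketch of that suggestion (untested; the file names are hypothetical, and it only runs in browsers that implement the mediagroup attribute): the video carries no audio track, each audio channel is a separate element in the same media group, and “clean audio” simply re-weights the channels.

    // One silent video plus per-channel audio elements, kept in sync via mediagroup.
    function addMedia(tag: "video" | "audio", src: string): HTMLMediaElement {
      const el = document.createElement(tag);
      el.src = src;
      el.setAttribute("mediagroup", "movie"); // same group => shared playback
      document.body.appendChild(el);
      return el;
    }

    addMedia("video", "movie-noaudio.webm");           // main video, no audio track
    const dialog = addMedia("audio", "dialog.webm");   // speech and key non-speech cues
    const ambience = addMedia("audio", "ambience.webm");

    // "Clean audio": amplify the dialog channel, attenuate the rest.
    dialog.volume = 1.0;
    ambience.volume = 0.2; // or ambience.muted = true to switch it off entirely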

    Another question was about the possibility of extending the list of @kind attribute values. I explained that right now we have a proposal for a new text track kind="forced" so as to provide forced subtitles for sections of video with foreign language. These would be on when no other subtitle or caption tracks are activated. I also explained that if there is a need for application-specific text tracks, kind="metadata" would be the correct choice.
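
    Since kind="forced" is only a proposal at this point, a sketch of the intended behaviour on top of the current API might look like this (untested; using the track label "forced" is a convention invented here to mark the track):

    // Show the designated "forced" subtitle track only when the user has not
    // switched on a subtitle or caption track of their own.
    function updateForcedTrack(video: HTMLVideoElement): void {
      const tracks = video.textTracks;
      let forced: TextTrack | null = null;
      let userTrackShowing = false;
      for (let i = 0; i < tracks.length; i++) {
        const t = tracks[i];
        if (t.label === "forced") {
          forced = t;
        } else if ((t.kind === "subtitles" || t.kind === "captions") && t.mode === "showing") {
          userTrackShowing = true;
        }
      }
      if (forced) forced.mode = userTrackShowing ? "disabled" : "showing";
    }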

    I received some further questions, in particular about how to apply styling to captions (e.g. color changes to text) and about how closely the browsers are able to keep synchronization across multiple media elements. The former was easily answered with the ::cue pseudo-element, but the latter is a quality-of-implementation feature, so I had to defer to individual browsers.
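
    For reference, the ::cue styling mentioned above can be as small as one injected stylesheet (the colors are arbitrary examples):

    // Restyle rendered WebVTT cues via the ::cue pseudo-element.
    const style = document.createElement("style");
    style.textContent = `
      video::cue    { color: yellow; background-color: rgba(0, 0, 0, 0.8); }
      video::cue(b) { color: red; } /* cues can also be styled per inner tag */
    `;
    document.head.appendChild(style);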

    Overall it was a good exercise to summarize the current state of HTML5 video accessibility and I was excited to show off support in Chrome for all the features that we designed into the standard.

  • JavaCV record video in Android

    11 January 2017, by wyx

    I want to record video silently and without a preview on Android. So I chose MediaRecorder; I could record without a preview, but what drives me crazy is that when MediaRecorder starts or stops it plays a “dee...” beep sound. I tried many ways around that, but I think it is probably related to the phone’s OS. So I am trying JavaCV, because I also want a live-streaming function in my app.

    But JavaCV has cost me too much time solving strange problems, because it’s my first time doing anything with C++ sources and video.

    Just adding compile group: 'org.bytedeco', name: 'javacv-platform', version: '1.3' as the README.md says, I can’t even build my APK:

    Error:Execution failed for task ':app:transformResourcesWithMergeJavaResForDebug'.
    > com.android.build.api.transform.TransformException: com.android.builder.packaging.DuplicateFileException: Duplicate files copied in APK org/bytedeco/javacpp/macosx-x86_64/libusb-1.0.dylib
       File1: /Users/wyx/.gradle/caches/modules-2/files-2.1/org.bytedeco.javacpp-presets/libfreenect/0.5.3-1.3/736d65a3ef042258429d8e7742128c411806b432/libfreenect-0.5.3-1.3-macosx-x86_64.jar
       File2: /Users/wyx/.gradle/caches/modules-2/files-2.1/org.bytedeco.javacpp-presets/libdc1394/2.2.4-1.3/f1498dacc46162ab68faeb8d66cf02b96fe41c61/libdc1394-2.2.4-1.3-macosx-x86_64.jar

    And then I modified it according to this issue, using the following to replace it. Now the APK builds, but it can’t run.

     android {
      ..............
       packagingOptions {
           exclude 'META-INF/services/javax.annotation.processing.Processor'
           pickFirst  'META-INF/maven/org.bytedeco.javacpp-presets/opencv/pom.properties'
           pickFirst  'META-INF/maven/org.bytedeco.javacpp-presets/opencv/pom.xml'
           pickFirst  'META-INF/maven/org.bytedeco.javacpp-presets/ffmpeg/pom.properties'
           pickFirst  'META-INF/maven/org.bytedeco.javacpp-presets/ffmpeg/pom.xml'
       }
    }

    dependencies {
       compile group: 'org.bytedeco', name: 'javacv', version: '1.3'
       compile group: 'org.bytedeco.javacpp-presets', name: 'ffmpeg', version: '3.2.1-1.3', classifier: 'android-x86'
       compile group: 'org.bytedeco.javacpp-presets', name: 'ffmpeg', version: '3.2.1-1.3', classifier: 'android-arm'
       compile group: 'org.bytedeco.javacpp-presets', name: 'opencv', version: '3.1.0-1.3', classifier: 'android-x86'
       compile group: 'org.bytedeco.javacpp-presets', name: 'opencv', version: '3.1.0-1.3', classifier: 'android-arm'
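       // Note (an assumption worth checking): the android-arm / android-x86
       // classifier jars ship only native libraries; Java classes such as
       // org.bytedeco.javacpp.avutil live in the default (no-classifier)
       // artifacts, so these may be needed as well:
       // compile group: 'org.bytedeco.javacpp-presets', name: 'ffmpeg', version: '3.2.1-1.3'
       // compile group: 'org.bytedeco.javacpp-presets', name: 'opencv', version: '3.1.0-1.3'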
    }

    My demo code: VideoService, which is invoked from MainActivity.

    package com.fs.fs.api;

    import com.fs.fs.App;
    import com.fs.fs.utils.DateUtils;
    import com.fs.fs.utils.FileUtils;

    import org.bytedeco.javacpp.avcodec;
    import org.bytedeco.javacv.FFmpegFrameRecorder;
    import org.bytedeco.javacv.FrameRecorder;

    import java.util.Date;

    /**
    * Created by wyx on 2017/1/11.
    */
    public class VideoService {
       private FFmpegFrameRecorder mFrameRecorder;
       private String path;

       private VideoService() {
       }

       private static class SingletonHolder {
           private static final VideoService INSTANCE = new VideoService();
       }

       public static VideoService getInstance() {
           return SingletonHolder.INSTANCE;
       }

       public void startRecordVideo() {
           String fileName = String.format("%s.%s", DateUtils.date2String(new Date(), "yyyyMMdd_HHmmss"), "mp4");
           path = FileUtils.getExternalFullPath(App.getInstance(), fileName);
           mFrameRecorder = new FFmpegFrameRecorder(path, 640, 480, 1);
           mFrameRecorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
           mFrameRecorder.setVideoOption("tune", "zerolatency");
           mFrameRecorder.setVideoOption("preset", "ultrafast");
           mFrameRecorder.setVideoOption("crf", "28");
           mFrameRecorder.setVideoBitrate(300 * 1000);
           mFrameRecorder.setFormat("mp4");

           mFrameRecorder.setFrameRate(30);
           mFrameRecorder.setAudioOption("crf", "0");
           mFrameRecorder.setSampleRate(48 * 1000);
           mFrameRecorder.setAudioBitrate(960 * 1000);
           mFrameRecorder.setAudioCodec(avcodec.AV_CODEC_ID_AAC);
           try {
               mFrameRecorder.start();
           } catch (FrameRecorder.Exception e) {
               e.printStackTrace();
           }
       }

       public void stop() {
           if (mFrameRecorder != null) {
               try {
                   mFrameRecorder.stop();
                   mFrameRecorder.release();
               } catch (FrameRecorder.Exception e) {
                   e.printStackTrace();
               }
               mFrameRecorder = null;
           }
       }

    }

    MainActivity

    package com.fs.fs.activity;

    import android.app.Activity;
    import android.os.Bundle;

    import com.fs.fs.R;
    import com.fs.fs.api.VideoService;

    import static java.lang.Thread.sleep;


    public class MainActivity extends Activity {

       @Override
       protected void onCreate(Bundle savedInstanceState) {
           super.onCreate(savedInstanceState);
           setContentView(R.layout.activity_main);


           VideoService.getInstance().startRecordVideo();
           try {
               sleep(10 * 1000);
           } catch (InterruptedException e) {
               e.printStackTrace();
           }
           VideoService.getInstance().stop();
       }
    }

    The error, which makes me want to cry:

    E/AndroidRuntime: FATAL EXCEPTION: main
                     Process: com.fs.fs, PID: 30259
                     java.lang.NoClassDefFoundError: java.lang.ClassNotFoundException: org.bytedeco.javacpp.avutil
                         at org.bytedeco.javacpp.Loader.load(Loader.java:590)
                         at org.bytedeco.javacpp.Loader.load(Loader.java:530)
                         at org.bytedeco.javacpp.avcodec$AVPacket.<clinit>(avcodec.java:1694)
                         at org.bytedeco.javacv.FFmpegFrameRecorder.<init>(FFmpegFrameRecorder.java:149)
                         at com.fs.fs.api.VideoService.startRecordVideo(VideoService.java:34)
                         at com.fs.fs.activity.MainActivity.onCreate(MainActivity.java:75)
                         at android.app.Activity.performCreate(Activity.java:5304)
                         at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1090)
                         at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2245)
                         at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2331)
                         at android.app.ActivityThread.access$1000(ActivityThread.java:143)
                         at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1244)
                         at android.os.Handler.dispatchMessage(Handler.java:102)
                         at android.os.Looper.loop(Looper.java:136)
                         at android.app.ActivityThread.main(ActivityThread.java:5291)
                         at java.lang.reflect.Method.invokeNative(Native Method)
                         at java.lang.reflect.Method.invoke(Method.java:515)
                         at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:849)
                         at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:665)
                         at dalvik.system.NativeStart.main(Native Method)
                      Caused by: java.lang.ClassNotFoundException: org.bytedeco.javacpp.avutil
                         at java.lang.Class.classForName(Native Method)
                         at java.lang.Class.forName(Class.java:251)
                         at org.bytedeco.javacpp.Loader.load(Loader.java:585)
                         at org.bytedeco.javacpp.Loader.load(Loader.java:530) 
                         at org.bytedeco.javacpp.avcodec$AVPacket.<clinit>(avcodec.java:1694) 
                         at org.bytedeco.javacv.FFmpegFrameRecorder.<init>(FFmpegFrameRecorder.java:149) 
                         at com.fs.fs.api.VideoService.startRecordVideo(VideoService.java:34) 
                         at com.fs.fs.activity.MainActivity.onCreate(MainActivity.java:75) 
                         at android.app.Activity.performCreate(Activity.java:5304) 
                         at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1090) 
                         at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2245) 
                         at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2331) 
                         at android.app.ActivityThread.access$1000(ActivityThread.java:143) 
                         at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1244) 
                         at android.os.Handler.dispatchMessage(Handler.java:102) 
                         at android.os.Looper.loop(Looper.java:136) 
                         at android.app.ActivityThread.main(ActivityThread.java:5291) 
                         at java.lang.reflect.Method.invokeNative(Native Method) 
                         at java.lang.reflect.Method.invoke(Method.java:515) 
                         at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:849) 
                         at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:665) 
                         at dalvik.system.NativeStart.main(Native Method) 
                      Caused by: java.lang.NoClassDefFoundError: org/bytedeco/javacpp/avutil
                         at java.lang.Class.classForName(Native Method) 
                         at java.lang.Class.forName(Class.java:251) 
                         at org.bytedeco.javacpp.Loader.load(Loader.java:585) 
                         at org.bytedeco.javacpp.Loader.load(Loader.java:530) 
                         at org.bytedeco.javacpp.avcodec$AVPacket.<clinit>(avcodec.java:1694) 
                         at org.bytedeco.javacv.FFmpegFrameRecorder.<init>(FFmpegFrameRecorder.java:149) 
                         at com.fs.fs.api.VideoService.startRecordVideo(VideoService.java:34) 
                         at com.fs.fs.activity.MainActivity.onCreate(MainActivity.java:75) 
                         at android.app.Activity.performCreate(Activity.java:5304) 
                         at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1090) 
                         at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2245) 
                         at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2331) 
                         at android.app.ActivityThread.access$1000(ActivityThread.java:143) 
                         at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1244) 
                         at android.os.Handler.dispatchMessage(Handler.java:102) 
                         at android.os.Looper.loop(Looper.java:136) 
                         at android.app.ActivityThread.main(ActivityThread.java:5291) 
                         at java.lang.reflect.Method.invokeNative(Native Method) 
                         at java.lang.reflect.Method.invoke(Method.java:515) 
                         at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:849) 
                         at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:665) 
                         at dalvik.system.NativeStart.main(Native Method) 
                      Caused by: java.lang.ClassNotFoundException: Didn't find class "org.bytedeco.javacpp.avutil" on path: DexPathList[[zip file "/data/app/com.fs.fs-2.apk"],nativeLibraryDirectories=[/data/app-lib/com.fs.fs-2, /vendor/lib, /system/lib, /data/datalib]]
                         at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:56)
                         at java.lang.ClassLoader.loadClass(ClassLoader.java:497)
                         at java.lang.ClassLoader.loadClass(ClassLoader.java:457)
                         at java.lang.Class.classForName(Native Method) 
                         at java.lang.Class.forName(Class.java:251) 
                         at org.bytedeco.javacpp.Loader.load(Loader.java:585) 
                         at org.bytedeco.javacpp.Loader.load(Loader.java:530) 
                         at org.bytedeco.javacpp.avcodec$AVPacket.<clinit>(avcodec.java:1694) 
                         at org.bytedeco.javacv.FFmpegFrameRecorder.<init>(FFmpegFrameRecorder.java:149) 
                         at com.fs.fs.api.VideoService.startRecordVideo(VideoService.java:34) 
                         at com.fs.fs.activity.MainActivity.onCreate(MainActivity.java:75) 
                         at android.app.Activity.performCreate(Activity.java:5304) 
                         at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1090) 
                         at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2245) 
                         at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2331) 
                         at android.app.ActivityThread.access$1000(ActivityThread.java:143) 
                         at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1244) 
                         at android.os.Handler.dispatchMessage(Handler.java:102) 
                         at android.os.Looper.loop(Looper.java:136) 
                         at android.app.ActivityThread.main(ActivityThread.java:5291) 
                         at java.lang.reflect.Method.invokeNative(Native Method) 
                         at java.lang.reflect.Method.invoke(Method.java:515) 
                         at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:849) 
                         at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:665) 
                         at dalvik.system.NativeStart.main(Native Method) 

    So I want to know a comfortable way to achieve my goal: record video silently and without a preview. And live-stream in real time?

    I found that ffmpeg4android is a perfect library for running ffmpeg commands. I just use it to compress videos from MediaRecorder, but I don’t know how to use it to achieve my goal.

  • ADD Image overlay to ffmpeg video stream

    1 July 2017, by Chris

    I am new to ffmpeg and want to add a HUD to the video stream, so a few questions.

    1. What file do I need to edit?
    2. What do I need to do to achieve this?

    Thanks in advance. Also, I am VERY new to all of this; I will need step-by-step instructions.

    I saw other questions saying to add this: ffmpeg -n -i video.mp4 -i logo.png -filter_complex "[0:v]setsar=sar=1[v];[v][1]blend=all_mode='overlay':all_opacity=0.7" -movflags +faststart tmb/video.mp4

    But I don’t know where to put it; I entered it in the terminal and got this:

    pi@raspberrypi:~ $ ffmpeg -n -i video.mp4 -i logo.png -filter_complex "[0:v]setsar=sar=1[v];[v][1]blend=all_mode='overlay':all_opacity=0.7" -movflags +faststart tmb/video.mp4
    ffmpeg version N-86215-gb5228e4 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 4.9.2 (Raspbian 4.9.2-10)
     configuration: --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree --extra-libs=-ldl
     libavutil      55. 63.100 / 55. 63.100
     libavcodec     57. 96.101 / 57. 96.101
     libavformat    57. 72.101 / 57. 72.101
     libavdevice    57.  7.100 / 57.  7.100
     libavfilter     6. 90.100 /  6. 90.100
     libswscale      4.  7.101 /  4.  7.101
     libswresample   2.  8.100 /  2.  8.100
     libpostproc    54.  6.100 / 54.  6.100
    video.mp4: No such file or directory

    I don’t understand what I am supposed to do with the video.mp4?

    HERE IS THE SCRIPT THAT SENDS THE VIDEO.

    import subprocess
    import shlex
    import re
    import os
    import time
    import urllib2
    import platform
    import json
    import sys
    import base64
    import random


    import argparse

    parser = argparse.ArgumentParser(description='robot control')
    parser.add_argument('camera_id')
    parser.add_argument('video_device_number', default=0, type=int)
    parser.add_argument('--kbps', default=450, type=int)
    parser.add_argument('--brightness', default=75, type=int, help='camera brightness')
    parser.add_argument('--contrast', default=75, type=int, help='camera contrast')
    parser.add_argument('--saturation', default=15, type=int, help='camera saturation')
    parser.add_argument('--rotate180', default=False, type=bool, help='rotate image 180 degrees')
    parser.add_argument('--env', default="prod")



    args = parser.parse_args()



    server = "runmyrobot.com"
    #server = "52.52.213.92"


    from socketIO_client import SocketIO, LoggingNamespace

    # enable raspicam driver in case a raspicam is being used
    os.system("sudo modprobe bcm2835-v4l2")


    if args.env == "dev":
       print "using dev port 8122"
       port = 8122
    elif args.env == "prod":
       print "using prod port 8022"
       port = 8022
    else:
       print "invalid environment"
       sys.exit(0)


    print "initializing socket io"
    print "server:", server
    print "port:", port
    socketIO = SocketIO(server, port, LoggingNamespace)
    print "finished initializing socket io"

    #ffmpeg -f qtkit -i 0 -f mpeg1video -b 400k -r 30 -s 320x240 http://52.8.81.124:8082/hello/320/240/


    def onHandleCameraCommand(*args):
       #thread.start_new_thread(handle_command, args)
       print args


    socketIO.on('command_to_camera', onHandleCameraCommand)


    def onHandleTakeSnapshotCommand(*args):
       print "taking snapshot"
       inputDeviceID = streamProcessDict['device_answer']
       snapShot(platform.system(), inputDeviceID)
       with open ("snapshot.jpg", 'rb') as f:
           data = f.read()
       print "emit"

       socketIO.emit('snapshot', {'image':base64.b64encode(data)})

    socketIO.on('take_snapshot_command', onHandleTakeSnapshotCommand)


    def randomSleep():
       """A short wait is good for quick recovery, but sometimes a longer delay is needed or it will just keep trying and failing short intervals, like because the system thinks the port is still in use and every retry makes the system think it's still in use. So, this has a high likelihood of picking a short interval, but will pick a long one sometimes."""

       timeToWait = random.choice((0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 5))
       print "sleeping", timeToWait
       time.sleep(timeToWait)



    def getVideoPort():


       url = 'http://%s/get_video_port/%s' % (server, cameraIDAnswer)


       for retryNumber in range(2000):
           try:
               print "GET", url
               response = urllib2.urlopen(url).read()
               break
           except:
               print "could not open url ", url
               time.sleep(2)

       return json.loads(response)['mpeg_stream_port']

    def getAudioPort():


       url = 'http://%s/get_audio_port/%s' % (server, cameraIDAnswer)


       for retryNumber in range(2000):
           try:
               print "GET", url
               response = urllib2.urlopen(url).read()
               break
           except:
               print "could not open url ", url
               time.sleep(2)

       return json.loads(response)['audio_stream_port']



    def runFfmpeg(commandLine):

       print commandLine
       ffmpegProcess = subprocess.Popen(shlex.split(commandLine))
       print "command started"

       return ffmpegProcess



    def handleDarwin(deviceNumber, videoPort, audioPort):


       p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "qtkit", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)

       out, err = p.communicate()

       print err

       deviceAnswer = raw_input("Enter the number of the camera device for your robot from the list above: ")
       commandLine = 'ffmpeg -f qtkit -i %s -f mpeg1video -b 400k -r 30 -s 320x240 http://%s:%s/hello/320/240/' % (deviceAnswer, server, videoPort)

       process = runFfmpeg(commandLine)

       return {'process': process, 'device_answer': deviceAnswer}


    def handleLinux(deviceNumber, videoPort, audioPort):

       print "sleeping to give the camera time to start working"
       randomSleep()
       print "finished sleeping"


       #p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "qtkit", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
       #out, err = p.communicate()
       #print err


       os.system("v4l2-ctl -c brightness={brightness} -c contrast={contrast} -c saturation={saturation}".format(brightness=args.brightness,
                                                                                                                contrast=args.contrast,
                                                                                                                saturation=args.saturation))


       if deviceNumber is None:
           deviceAnswer = raw_input("Enter the number of the camera device for your robot: ")
       else:
           deviceAnswer = str(deviceNumber)


       #commandLine = '/usr/local/bin/ffmpeg -s 320x240 -f video4linux2 -i /dev/video%s -f mpeg1video -b 1k -r 20 http://runmyrobot.com:%s/hello/320/240/' % (deviceAnswer, videoPort)
       #commandLine = '/usr/local/bin/ffmpeg -s 640x480 -f video4linux2 -i /dev/video%s -f mpeg1video -b 150k -r 20 http://%s:%s/hello/640/480/' % (deviceAnswer, server, videoPort)
       # For new JSMpeg
       #commandLine = '/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video%s -f mpegts -codec:v mpeg1video -s 640x480 -b:v 250k -bf 0 http://%s:%s/hello/640/480/' % (deviceAnswer, server, videoPort) # ClawDaddy
       #commandLine = '/usr/local/bin/ffmpeg -s 1280x720 -f video4linux2 -i /dev/video%s -f mpeg1video -b 1k -r 20 http://runmyrobot.com:%s/hello/1280/720/' % (deviceAnswer, videoPort)


       if args.rotate180:
           rotationOption = "-vf transpose=2,transpose=2"
       else:
           rotationOption = ""

       # video with audio
       videoCommandLine = '/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video%s %s -f mpegts -codec:v mpeg1video -s 640x480 -b:v %dk -bf 0 -muxdelay 0.001 http://%s:%s/hello/640/480/' % (deviceAnswer, rotationOption, args.kbps, server, videoPort)
       audioCommandLine = '/usr/local/bin/ffmpeg -f alsa -ar 44100 -ac 1 -i hw:1 -f mpegts -codec:a mp2 -b:a 32k -muxdelay 0.001 http://%s:%s/hello/640/480/' % (server, audioPort)
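       # Hypothetical HUD variant (untested sketch; assumes a logo.png next to this
       # script): add the image as a second ffmpeg input and composite it with the
       # overlay filter before encoding, e.g.
       #   ... -i /dev/video%s -i logo.png -filter_complex "[0:v][1:v]overlay=10:10" -f mpegts ...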


       print videoCommandLine
       print audioCommandLine

       videoProcess = runFfmpeg(videoCommandLine)
       audioProcess = runFfmpeg(audioCommandLine)

       # also expose the video process as 'process' (main() looks it up under that
       # key) and use the 'audio_process' key that main() expects
       return {'process': videoProcess, 'video_process': videoProcess, 'audio_process': audioProcess, 'device_answer': deviceAnswer}



    def handleWindows(deviceNumber, videoPort):

       p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "dshow", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)


       out, err = p.communicate()
       lines = err.split('\n')

       count = 0

       devices = []

       for line in lines:

           #if "]  \"" in line:
           #    print "line:", line

           m = re.search('.*\\"(.*)\\"', line)
           if m != None:
               #print line
               if m.group(1)[0:1] != '@':
                   print count, m.group(1)
                   devices.append(m.group(1))
                   count += 1


       if deviceNumber is None:
           deviceAnswer = raw_input("Enter the number of the camera device for your robot from the list above: ")
       else:
           deviceAnswer = str(deviceNumber)

       device = devices[int(deviceAnswer)]
       commandLine = 'ffmpeg -s 640x480 -f dshow -i video="%s" -f mpegts -codec:v mpeg1video -b 200k -r 20 http://%s:%s/hello/640/480/' % (device, server, videoPort)


       process = runFfmpeg(commandLine)

       return {'process': process, 'device_answer': device}



    def handleWindowsScreenCapture(deviceNumber, videoPort):

       p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "dshow", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)


       out, err = p.communicate()

       lines = err.split('\n')

       count = 0

       devices = []

       for line in lines:

           #if "]  \"" in line:
           #    print "line:", line

           m = re.search('.*\\"(.*)\\"', line)
           if m != None:
               #print line
               if m.group(1)[0:1] != '@':
                   print count, m.group(1)
                   devices.append(m.group(1))
                   count += 1


       if deviceNumber is None:
           deviceAnswer = raw_input("Enter the number of the camera device for your robot from the list above: ")
       else:
           deviceAnswer = str(deviceNumber)



       device = devices[int(deviceAnswer)]
       commandLine = 'ffmpeg -f dshow -i video="screen-capture-recorder" -vf "scale=640:480" -f mpeg1video -b 50k -r 20 http://%s:%s/hello/640/480/' % (server, videoPort)

       print "command line:", commandLine

       process = runFfmpeg(commandLine)

       return {'process': process, 'device_answer': device}




    def snapShot(operatingSystem, inputDeviceID, filename="snapshot.jpg"):    

       try:
           os.remove('snapshot.jpg')
       except:
           print "did not remove file"

       commandLineDict = {
           'Darwin': 'ffmpeg -y -f qtkit -i %s -vframes 1 %s' % (inputDeviceID, filename),
           'Linux': '/usr/local/bin/ffmpeg -y -f video4linux2 -i /dev/video%s -vframes 1 -q:v 1000 -vf scale=320:240 %s' % (inputDeviceID, filename),
           'Windows': 'ffmpeg -y -s 320x240 -f dshow -i video="%s" -vframes 1 %s' % (inputDeviceID, filename)}

       print commandLineDict[operatingSystem]
       os.system(commandLineDict[operatingSystem])



    def startVideoCapture():

       videoPort = getVideoPort()
       audioPort = getAudioPort()
       print "video port:", videoPort
       print "audio port:", audioPort

       #if len(sys.argv) >= 3:
       #    deviceNumber = sys.argv[2]
       #else:
       #    deviceNumber = None
       deviceNumber = args.video_device_number

       result = None
       if platform.system() == 'Darwin':
           result = handleDarwin(deviceNumber, videoPort, audioPort)
       elif platform.system() == 'Linux':
           result = handleLinux(deviceNumber, videoPort, audioPort)
       elif platform.system() == 'Windows':
           #result = handleWindowsScreenCapture(deviceNumber, videoPort)
           result = handleWindows(deviceNumber, videoPort)
       else:
           print "unknown platform", platform.system()

       return result


    def timeInMilliseconds():
       return int(round(time.time() * 1000))



    def main():

       print "main"

       streamProcessDict = None


       twitterSnapCount = 0

       while True:



           socketIO.emit('send_video_status', {'send_video_process_exists': True,
                                               'camera_id':cameraIDAnswer})


           if streamProcessDict is not None:
               print "stopping previously running ffmpeg (needs to happen if this is not the first iteration)"
               streamProcessDict['process'].kill()

           print "starting process just to get device result" # this should be a separate function so you don't have to do this
           streamProcessDict = startVideoCapture()
           inputDeviceID = streamProcessDict['device_answer']
           print "stopping video capture"
           streamProcessDict['process'].kill()

           #print "sleeping"
           #time.sleep(3)
           #frameCount = int(round(time.time() * 1000))

           videoWithSnapshots = False
           while videoWithSnapshots:

               frameCount = timeInMilliseconds()

               print "taking single frame image"
               snapShot(platform.system(), inputDeviceID, filename="single_frame_image.jpg")

               with open ("single_frame_image.jpg", 'rb') as f:

                   # every so many frames, post a snapshot to twitter
                   #if frameCount % 450 == 0:
                   if frameCount % 6000 == 0:
                           data = f.read()
                           print "emit"
                           socketIO.emit('snapshot', {'frame_count':frameCount, 'image':base64.b64encode(data)})
                   data = f.read()

               print "emit"
               socketIO.emit('single_frame_image', {'frame_count':frameCount, 'image':base64.b64encode(data)})
               time.sleep(0)

               #frameCount += 1


           if False:
            if platform.system() != 'Windows':
               print "taking snapshot"
               snapShot(platform.system(), inputDeviceID)
               with open ("snapshot.jpg", 'rb') as f:
                   data = f.read()
               print "emit"

               # skip sending the first image because it's mostly black, maybe completely black
               #todo: should find out why this black image happens
               if twitterSnapCount > 0:
                   socketIO.emit('snapshot', {'image':base64.b64encode(data)})




           print "starting video capture"
           streamProcessDict = startVideoCapture()


       # This loop counts out a delay that occurs between twitter snapshots.
       # Every 40 seconds, it kills and restarts ffmpeg.
       # Every 20 seconds, it sends a signal to the server indicating status of processes.
           period = 2*60*60 # period in seconds between snaps
           for count in range(period):
               time.sleep(1)

               if count % 20 == 0:
                   socketIO.emit('send_video_status', {'send_video_process_exists': True,
                                                       'camera_id':cameraIDAnswer})

               if count % 40 == 30:
                   print "stopping video capture just in case it has reached a state where it's looping forever, not sending video, and not dying as a process, which can happen"
                   streamProcessDict['video_process'].kill()
                   streamProcessDict['audio_process'].kill()
                   time.sleep(1)

               if count % 80 == 75:
                   print "send status about this process and its child process ffmpeg"
                   ffmpegProcessExists = streamProcessDict['process'].poll() is None
                   socketIO.emit('send_video_status', {'send_video_process_exists': True,
                                                       'ffmpeg_process_exists': ffmpegProcessExists,
                                                       'camera_id':cameraIDAnswer})

               #if count % 190 == 180:
               #    print "reboot system in case the webcam is not working"
               #    os.system("sudo reboot")

           # if the video or audio stream process dies, restart both
           if streamProcessDict['video_process'].poll() is not None or streamProcessDict['audio_process'].poll() is not None:
                   # wait before trying to start ffmpeg
                   print "ffmpeg process is dead, waiting before trying to restart"
                   randomSleep()
                   streamProcessDict = startVideoCapture()

           twitterSnapCount += 1

    if __name__ == "__main__":


       #if len(sys.argv) > 1:
       #    cameraIDAnswer = sys.argv[1]
       #else:
       #    cameraIDAnswer = raw_input("Enter the Camera ID for your robot, you can get it by pointing a browser to the runmyrobot server %s: " % server)

       cameraIDAnswer = args.camera_id


       main()

    ERROR:

    ffmpeg -n -f mpegts -i http://54.183.232.63:12221 -i logo.png -filter_complex "[0:v]setsar=sar=1[v];[v][1]blend=all_mode='overlay':all_opacity=0.7" -movflags +faststart tmb/video.mp4
    ffmpeg version N-86215-gb5228e4 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 4.9.2 (Raspbian 4.9.2-10)
     configuration: --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree --extra-libs=-ldl
     libavutil      55. 63.100 / 55. 63.100
     libavcodec     57. 96.101 / 57. 96.101
     libavformat    57. 72.101 / 57. 72.101
     libavdevice    57.  7.100 / 57.  7.100
     libavfilter     6. 90.100 /  6. 90.100
     libswscale      4.  7.101 /  4.  7.101
     libswresample   2.  8.100 /  2.  8.100
     libpostproc    54.  6.100 / 54.  6.100
    [mpegts @ 0x1a57390] Could not detect TS packet size, defaulting to non-FEC/DVHS
    http://54.183.232.63:12221: could not find codec parameters