Other articles (50)

  • Customising by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present the changes to your MédiaSPIP, or news about your projects, on your MédiaSPIP using the news section.
    In MédiaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of type news item, the fields offered by default are: Publication date (customise the publication date) (...)

  • Publishing on MédiaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSPIP installation is at version 0.2 or later. If necessary, contact the administrator of your MédiaSPIP to find out.

On other sites (6962)

  • lavu/opt: add API for retrieving array-type option values

    25 July 2024, by Anton Khirnov
    lavu/opt: add API for retrieving array-type option values
    

    Previously one could only convert the entire array to a string, not
    access individual elements.

    • [DH] doc/APIchanges
    • [DH] libavutil/opt.c
    • [DH] libavutil/opt.h
    • [DH] libavutil/tests/opt.c
    • [DH] libavutil/version.h
    • [DH] tests/ref/fate/opt
  • Add image overlay to ffmpeg video stream

    1 July 2017, by Chris

    I am new to ffmpeg and want to add a HUD to the video stream, so I have a few questions.

    1. Which file do I need to edit?
    2. What do I need to do to achieve this?

    Thanks in advance. I am also very new to all of this, so I will need step-by-step instructions.

    I saw other questions saying to add this: ffmpeg -n -i video.mp4 -i logo.png -filter_complex "[0:v]setsar=sar=1[v];[v][1]blend=all_mode='overlay':all_opacity=0.7" -movflags +faststart tmb/video.mp4

    But I don't know where to put it. I entered it in the terminal and got this:

    pi@raspberrypi:~ $ ffmpeg -n -i video.mp4 -i logo.png -filter_complex "[0:v]setsar=sar=1[v];[v][1]blend=all_mode='overlay':all_opacity=0.7" -movflags +faststart tmb/video.mp4
    ffmpeg version N-86215-gb5228e4 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 4.9.2 (Raspbian 4.9.2-10)
     configuration: --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree --extra-libs=-ldl
     libavutil      55. 63.100 / 55. 63.100
     libavcodec     57. 96.101 / 57. 96.101
     libavformat    57. 72.101 / 57. 72.101
     libavdevice    57.  7.100 / 57.  7.100
     libavfilter     6. 90.100 /  6. 90.100
     libswscale      4.  7.101 /  4.  7.101
     libswresample   2.  8.100 /  2.  8.100
     libpostproc    54.  6.100 / 54.  6.100
    video.mp4: No such file or directory

    I don't understand what I am supposed to do with the video.mp4? (See the sketch after the error output at the end of this question.)

    Here is the script that sends the video:

    import subprocess
    import shlex
    import re
    import os
    import time
    import urllib2
    import platform
    import json
    import sys
    import base64
    import random


    import argparse

    parser = argparse.ArgumentParser(description='robot control')
    parser.add_argument('camera_id')
    parser.add_argument('video_device_number', default=0, type=int)
    parser.add_argument('--kbps', default=450, type=int)
    parser.add_argument('--brightness', default=75, type=int, help='camera brightness')
    parser.add_argument('--contrast', default=75, type=int, help='camera contrast')
    parser.add_argument('--saturation', default=15, type=int, help='camera saturation')
    parser.add_argument('--rotate180', default=False, type=bool, help='rotate image 180 degrees')
    parser.add_argument('--env', default="prod")



    args = parser.parse_args()



    server = "runmyrobot.com"
    #server = "52.52.213.92"


    from socketIO_client import SocketIO, LoggingNamespace

    # enable raspicam driver in case a raspicam is being used
    os.system("sudo modprobe bcm2835-v4l2")


    if args.env == "dev":
       print "using dev port 8122"
       port = 8122
    elif args.env == "prod":
       print "using prod port 8022"
       port = 8022
    else:
       print "invalid environment"
       sys.exit(0)


    print "initializing socket io"
    print "server:", server
    print "port:", port
    socketIO = SocketIO(server, port, LoggingNamespace)
    print "finished initializing socket io"

    #ffmpeg -f qtkit -i 0 -f mpeg1video -b 400k -r 30 -s 320x240 http://52.8.81.124:8082/hello/320/240/


    def onHandleCameraCommand(*args):
       #thread.start_new_thread(handle_command, args)
       print args


    socketIO.on('command_to_camera', onHandleCameraCommand)


    def onHandleTakeSnapshotCommand(*args):
       print "taking snapshot"
       inputDeviceID = streamProcessDict['device_answer']
       snapShot(platform.system(), inputDeviceID)
       with open ("snapshot.jpg", 'rb') as f:
           data = f.read()
       print "emit"

       socketIO.emit('snapshot', {'image':base64.b64encode(data)})

    socketIO.on('take_snapshot_command', onHandleTakeSnapshotCommand)


    def randomSleep():
       """A short wait is good for quick recovery, but sometimes a longer delay is needed or it will just keep trying and failing short intervals, like because the system thinks the port is still in use and every retry makes the system think it's still in use. So, this has a high likelihood of picking a short interval, but will pick a long one sometimes."""

       timeToWait = random.choice((0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 5))
       print "sleeping", timeToWait
       time.sleep(timeToWait)



    def getVideoPort():


       url = 'http://%s/get_video_port/%s' % (server, cameraIDAnswer)


       for retryNumber in range(2000):
           try:
               print "GET", url
               response = urllib2.urlopen(url).read()
               break
           except:
               print "could not open url ", url
               time.sleep(2)

       return json.loads(response)['mpeg_stream_port']

    def getAudioPort():


       url = 'http://%s/get_audio_port/%s' % (server, cameraIDAnswer)


       for retryNumber in range(2000):
           try:
               print "GET", url
               response = urllib2.urlopen(url).read()
               break
           except:
               print "could not open url ", url
               time.sleep(2)

       return json.loads(response)['audio_stream_port']



    def runFfmpeg(commandLine):

       print commandLine
       ffmpegProcess = subprocess.Popen(shlex.split(commandLine))
       print "command started"

       return ffmpegProcess



    def handleDarwin(deviceNumber, videoPort, audioPort):


       p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "qtkit", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)

       out, err = p.communicate()

       print err

       deviceAnswer = raw_input("Enter the number of the camera device for your robot from the list above: ")
       commandLine = 'ffmpeg -f qtkit -i %s -f mpeg1video -b 400k -r 30 -s 320x240 http://%s:%s/hello/320/240/' % (deviceAnswer, server, videoPort)

       process = runFfmpeg(commandLine)

       return {'process': process, 'device_answer': deviceAnswer}


    def handleLinux(deviceNumber, videoPort, audioPort):

       print "sleeping to give the camera time to start working"
       randomSleep()
       print "finished sleeping"


       #p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "qtkit", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
       #out, err = p.communicate()
       #print err


       os.system("v4l2-ctl -c brightness={brightness} -c contrast={contrast} -c saturation={saturation}".format(brightness=args.brightness,
                                                                                                                contrast=args.contrast,
                                                                                                                saturation=args.saturation))


       if deviceNumber is None:
           deviceAnswer = raw_input("Enter the number of the camera device for your robot: ")
       else:
           deviceAnswer = str(deviceNumber)


       #commandLine = '/usr/local/bin/ffmpeg -s 320x240 -f video4linux2 -i /dev/video%s -f mpeg1video -b 1k -r 20 http://runmyrobot.com:%s/hello/320/240/' % (deviceAnswer, videoPort)
       #commandLine = '/usr/local/bin/ffmpeg -s 640x480 -f video4linux2 -i /dev/video%s -f mpeg1video -b 150k -r 20 http://%s:%s/hello/640/480/' % (deviceAnswer, server, videoPort)
       # For new JSMpeg
       #commandLine = '/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video%s -f mpegts -codec:v mpeg1video -s 640x480 -b:v 250k -bf 0 http://%s:%s/hello/640/480/' % (deviceAnswer, server, videoPort) # ClawDaddy
       #commandLine = '/usr/local/bin/ffmpeg -s 1280x720 -f video4linux2 -i /dev/video%s -f mpeg1video -b 1k -r 20 http://runmyrobot.com:%s/hello/1280/720/' % (deviceAnswer, videoPort)


       if args.rotate180:
           rotationOption = "-vf transpose=2,transpose=2"
       else:
           rotationOption = ""

       # video with audio
       videoCommandLine = '/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video%s %s -f mpegts -codec:v mpeg1video -s 640x480 -b:v %dk -bf 0 -muxdelay 0.001 http://%s:%s/hello/640/480/' % (deviceAnswer, rotationOption, args.kbps, server, videoPort)
       audioCommandLine = '/usr/local/bin/ffmpeg -f alsa -ar 44100 -ac 1 -i hw:1 -f mpegts -codec:a mp2 -b:a 32k -muxdelay 0.001 http://%s:%s/hello/640/480/' % (server, audioPort)


       print videoCommandLine
       print audioCommandLine

       videoProcess = runFfmpeg(videoCommandLine)
       audioProcess = runFfmpeg(audioCommandLine)

       return {'video_process': videoProcess, 'audio_process': audioProcess, 'device_answer': deviceAnswer}



    def handleWindows(deviceNumber, videoPort):

       p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "dshow", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)


       out, err = p.communicate()
       lines = err.split('\n')

       count = 0

       devices = []

       for line in lines:

           #if "]  \"" in line:
           #    print "line:", line

           m = re.search('.*\\"(.*)\\"', line)
           if m != None:
               #print line
               if m.group(1)[0:1] != '@':
                   print count, m.group(1)
                   devices.append(m.group(1))
                   count += 1


       if deviceNumber is None:
           deviceAnswer = raw_input("Enter the number of the camera device for your robot from the list above: ")
       else:
           deviceAnswer = str(deviceNumber)

       device = devices[int(deviceAnswer)]
       commandLine = 'ffmpeg -s 640x480 -f dshow -i video="%s" -f mpegts -codec:v mpeg1video -b 200k -r 20 http://%s:%s/hello/640/480/' % (device, server, videoPort)


       process = runFfmpeg(commandLine)

       return {'process': process, 'device_answer': device}



    def handleWindowsScreenCapture(deviceNumber, videoPort):

       p = subprocess.Popen(["ffmpeg", "-list_devices", "true", "-f", "dshow", "-i", "dummy"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)


       out, err = p.communicate()

       lines = err.split('\n')

       count = 0

       devices = []

       for line in lines:

           #if "]  \"" in line:
           #    print "line:", line

           m = re.search('.*\\"(.*)\\"', line)
           if m != None:
               #print line
               if m.group(1)[0:1] != '@':
                   print count, m.group(1)
                   devices.append(m.group(1))
                   count += 1


       if deviceNumber is None:
           deviceAnswer = raw_input("Enter the number of the camera device for your robot from the list above: ")
       else:
           deviceAnswer = str(deviceNumber)



       device = devices[int(deviceAnswer)]
       commandLine = 'ffmpeg -f dshow -i video="screen-capture-recorder" -vf "scale=640:480" -f mpeg1video -b 50k -r 20 http://%s:%s/hello/640/480/' % (server, videoPort)

       print "command line:", commandLine

       process = runFfmpeg(commandLine)

       return {'process': process, 'device_answer': device}




    def snapShot(operatingSystem, inputDeviceID, filename="snapshot.jpg"):    

       try:
           os.remove('snapshot.jpg')
       except:
           print "did not remove file"

       commandLineDict = {
           'Darwin': 'ffmpeg -y -f qtkit -i %s -vframes 1 %s' % (inputDeviceID, filename),
           'Linux': '/usr/local/bin/ffmpeg -y -f video4linux2 -i /dev/video%s -vframes 1 -q:v 1000 -vf scale=320:240 %s' % (inputDeviceID, filename),
           'Windows': 'ffmpeg -y -s 320x240 -f dshow -i video="%s" -vframes 1 %s' % (inputDeviceID, filename)}

       print commandLineDict[operatingSystem]
       os.system(commandLineDict[operatingSystem])



    def startVideoCapture():

       videoPort = getVideoPort()
       audioPort = getAudioPort()
       print "video port:", videoPort
       print "audio port:", audioPort

       #if len(sys.argv) >= 3:
       #    deviceNumber = sys.argv[2]
       #else:
       #    deviceNumber = None
       deviceNumber = args.video_device_number

       result = None
       if platform.system() == 'Darwin':
           result = handleDarwin(deviceNumber, videoPort, audioPort)
       elif platform.system() == 'Linux':
           result = handleLinux(deviceNumber, videoPort, audioPort)
       elif platform.system() == 'Windows':
           #result = handleWindowsScreenCapture(deviceNumber, videoPort)
           result = handleWindows(deviceNumber, videoPort, audioPort)
       else:
           print "unknown platform", platform.system()

       return result


    def timeInMilliseconds():
       return int(round(time.time() * 1000))



    def main():

       print "main"

       streamProcessDict = None


       twitterSnapCount = 0

       while True:



           socketIO.emit('send_video_status', {'send_video_process_exists': True,
                                               'camera_id':cameraIDAnswer})


           if streamProcessDict is not None:
               print "stopping previously running ffmpeg (needs to happen if this is not the first iteration)"
               streamProcessDict['process'].kill()

           print "starting process just to get device result" # this should be a separate function so you don't have to do this
           streamProcessDict = startVideoCapture()
           inputDeviceID = streamProcessDict['device_answer']
           print "stopping video capture"
           streamProcessDict['process'].kill()

           #print "sleeping"
           #time.sleep(3)
           #frameCount = int(round(time.time() * 1000))

           videoWithSnapshots = False
           while videoWithSnapshots:

               frameCount = timeInMilliseconds()

               print "taking single frame image"
               snapShot(platform.system(), inputDeviceID, filename="single_frame_image.jpg")

               with open ("single_frame_image.jpg", 'rb') as f:

                   # every so many frames, post a snapshot to twitter
                   #if frameCount % 450 == 0:
                   if frameCount % 6000 == 0:
                           data = f.read()
                           print "emit"
                           socketIO.emit('snapshot', {'frame_count':frameCount, 'image':base64.b64encode(data)})
                   data = f.read()

               print "emit"
               socketIO.emit('single_frame_image', {'frame_count':frameCount, 'image':base64.b64encode(data)})
               time.sleep(0)

               #frameCount += 1


           if False:
            if platform.system() != 'Windows':
               print "taking snapshot"
               snapShot(platform.system(), inputDeviceID)
               with open ("snapshot.jpg", 'rb') as f:
                   data = f.read()
               print "emit"

               # skip sending the first image because it's mostly black, maybe completely black
               #todo: should find out why this black image happens
               if twitterSnapCount > 0:
                   socketIO.emit('snapshot', {'image':base64.b64encode(data)})




           print "starting video capture"
           streamProcessDict = startVideoCapture()


           # This loop counts out a delay that occurs between twitter snapshots.
           # Every 50 seconds, it kills and restarts ffmpeg.
           # Every 40 seconds, it sends a signal to the server indicating status of processes.
           period = 2*60*60 # period in seconds between snaps
           for count in range(period):
               time.sleep(1)

               if count % 20 == 0:
                   socketIO.emit('send_video_status', {'send_video_process_exists': True,
                                                       'camera_id':cameraIDAnswer})

               if count % 40 == 30:
                   print "stopping video capture just in case it has reached a state where it's looping forever, not sending video, and not dying as a process, which can happen"
                   streamProcessDict['video_process'].kill()
                   streamProcessDict['audio_process'].kill()
                   time.sleep(1)

               if count % 80 == 75:
                   print "send status about this process and its child process ffmpeg"
                   ffmpegProcessExists = streamProcessDict['process'].poll() is None
                   socketIO.emit('send_video_status', {'send_video_process_exists': True,
                                                       'ffmpeg_process_exists': ffmpegProcessExists,
                                                       'camera_id':cameraIDAnswer})

               #if count % 190 == 180:
               #    print "reboot system in case the webcam is not working"
               #    os.system("sudo reboot")

               # if the video stream process dies, restart it
               if streamProcessDict['video_process'].poll() is not None or streamProcessDict['audio_process'].poll():
                   # wait before trying to start ffmpeg
                   print "ffmpeg process is dead, waiting before trying to restart"
                   randomSleep()
                   streamProcessDict = startVideoCapture()

           twitterSnapCount += 1

    if __name__ == "__main__":


       #if len(sys.argv) > 1:
       #    cameraIDAnswer = sys.argv[1]
       #else:
       #    cameraIDAnswer = raw_input("Enter the Camera ID for your robot, you can get it by pointing a browser to the runmyrobot server %s: " % server)

       cameraIDAnswer = args.camera_id


       main()

    ERROR:

    ffmpeg -n -f mpegts -i http://54.183.232.63:12221 -i logo.png -filter_complex "[0:v]setsar=sar=1[v];[v][1]blend=all_mode='overlay':all_opacity=0.7" -movflags +faststart tmb/video.mp4
    ffmpeg version N-86215-gb5228e4 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 4.9.2 (Raspbian 4.9.2-10)
     configuration: --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree --extra-libs=-ldl
     libavutil      55. 63.100 / 55. 63.100
     libavcodec     57. 96.101 / 57. 96.101
     libavformat    57. 72.101 / 57. 72.101
     libavdevice    57.  7.100 / 57.  7.100
     libavfilter     6. 90.100 /  6. 90.100
     libswscale      4.  7.101 /  4.  7.101
     libswresample   2.  8.100 /  2.  8.100
     libpostproc    54.  6.100 / 54.  6.100
    [mpegts @ 0x1a57390] Could not detect TS packet size, defaulting to non-FEC/DVHS
    http://54.183.232.63:12221: could not find codec parameters
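
    For illustration only, here is a minimal, untested sketch of how the suggested filter could be wired into the script above instead of being run as a separate command against a video.mp4 that does not exist: the logo is added as a second input to the videoCommandLine built in handleLinux, and a filter_complex applies it to the live v4l2 capture before it is streamed. The logo path, its scaled width and its on-screen position are assumptions; overlay is used rather than blend because blend expects both inputs to have the same dimensions, and the rotate180 option is left out for brevity.

    # Hypothetical replacement for the videoCommandLine assignment in handleLinux();
    # deviceAnswer, args, server and videoPort come from the surrounding script.
    logoPath = '/home/pi/logo.png'  # assumption: wherever the HUD image lives

    videoCommandLine = (
        '/usr/local/bin/ffmpeg -f v4l2 -framerate 25 -video_size 640x480 '
        '-i /dev/video%s -i %s '
        # scale the logo down and pin it to the top-left corner of the camera frame
        '-filter_complex "[1:v]scale=160:-1[logo];[0:v][logo]overlay=10:10[out]" '
        '-map "[out]" -f mpegts -codec:v mpeg1video -s 640x480 -b:v %dk -bf 0 '
        '-muxdelay 0.001 http://%s:%s/hello/640/480/'
        % (deviceAnswer, logoPath, args.kbps, server, videoPort))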
  • Compile OpenCV3 with Cuda7.5 ffmpeg (latest) issue OpenSUSE 13.2

    23 January 2016, by Ingeborg

    I am using OpenSUSE 13.2 x86-64 with a GeForce 940M GPU, and I want to work with it from the Qt5 IDE.
    For this purpose, I installed my GPU driver together with the CUDA 7.5 toolkit RPM from the CUDA zone. Everything is nearly fine:
    it detects everything, and I have built and executed a couple of the CUDA samples.

    As the next step, I installed the current FFmpeg version with nvenc and other libraries such as Xvid, plus a lot of
    other useful things that would be too much to list here. After that I downloaded the current
    OpenCV-3.0.0 source code and ran cmake-gui, where I enabled CUDA, FFmpeg, Qt5, etc., and then ran make.

    At various points of the make run (make -j4) I get this kind of error in my console (the list of multiple-definition
    errors is much longer). This is the first one.

    .
    .
    .
    .
    .
    .
    nvlink error   : Multiple definition of '_ZN2cv5cudev16color_cvt_detail15c_HlsSectorDataE' in '/home/peter/Programme/opencv/build/modules/cudev/test/CMakeFiles/opencv_test_cudev.dir//./opencv_test_cudev_generated_test_arithm_func.cu.o', first defined in '/home/peter/Programme/opencv/build/modules/cudev/test/CMakeFiles/opencv_test_cudev.dir//./opencv_test_cudev_generated_test_lut.cu.o'
    nvlink error   : Multiple definition of '_ZN2cv5cudev16color_cvt_detail16c_sRGBGammaTab_bE' in '/home/peter/Programme/opencv/build/modules/cudev/test/CMakeFiles/opencv_test_cudev.dir//./opencv_test_cudev_generated_test_arithm_func.cu.o', first defined in '/home/peter/Programme/opencv/build/modules/cudev/test/CMakeFiles/opencv_test_cudev.dir//./opencv_test_cudev_generated_test_lut.cu.o'
    nvlink error   : Multiple definition of '_ZN2cv5cudev16color_cvt_detail14c_sRGBGammaTabE' in '/home/peter/Programme/opencv/build/modules/cudev/test/CMakeFiles/opencv_test_cudev.dir//./opencv_test_cudev_generated_test_arithm_func.cu.o', first defined in '/home/peter/Programme/opencv/build/modules/cudev/test/CMakeFiles/opencv_test_cudev.dir//./opencv_test_cudev_generated_test_lut.cu.o'
    nvlink error   : Multiple definition of '_ZN2cv5cudev16color_cvt_detail17c_sRGBInvGammaTabE' in '/home/peter/Programme/opencv/build/modules/cudev/test/CMakeFiles/opencv_test_cudev.dir//./opencv_test_cudev_generated_test_arithm_func.cu.o', first defined in '/home/peter/Programme/opencv/build/modules/cudev/test/CMakeFiles/opencv_test_cudev.dir//./opencv_test_cudev_generated_test_lut.cu.o'
    nvlink error   : Multiple definition of '_ZN2cv5cudev16color_cvt_detail12c_LabCbrtTabE' in '/home/peter/Programme/opencv/build/modules/cudev/test/CMakeFiles/opencv_test_cudev.dir//./opencv_test_cudev_generated_test_arithm_func.cu.o', first defined in '/home/peter/Programme/opencv/build/modules/cudev/test/CMakeFiles/opencv_test_cudev.dir//./opencv_test_cudev_generated_test_lut.cu.o'
    modules/cudev/test/CMakeFiles/opencv_test_cudev.dir/build.make:5302: recipe for target 'modules/cudev/test/CMakeFiles/opencv_test_cudev.dir/./opencv_test_cudev_intermediate_link.o' failed
    make[2]: *** [modules/cudev/test/CMakeFiles/opencv_test_cudev.dir/./opencv_test_cudev_intermediate_link.o] Error 255
    CMakeFiles/Makefile2:1182: recipe for target 'modules/cudev/test/CMakeFiles/opencv_test_cudev.dir/all' failed
    make[1]: *** [modules/cudev/test/CMakeFiles/opencv_test_cudev.dir/all] Error 2
    Makefile:137: recipe for target 'all' failed
    make: *** [all] Error 2

    And I have no idea how to fix that.

    Thanks!

    Edit: added the cmake configuration

    ~/Programme/opencv/build> cmake /home/peter/Programme/opencv-3.0.0
    CMake Error: The source "/home/peter/Programme/opencv-3.0.0/CMakeLists.txt" does not match the source "/home/peter/Programme/opencv/CMakeLists.txt" used to generate cache.  Re-run cmake with a different source directory.
    peter@linux-3mgc:~/Programme/opencv/build> cmake /home/peter/Programme/opencv
    -- Detected version of GNU GCC: 48 (408)
    -- Found ZLIB: /usr/lib64/libz.so (found suitable version "1.2.8", minimum required is "1.2.3")
    -- Found ZLIB: /usr/lib64/libz.so (found version "1.2.8")
    -- checking for module 'gstreamer-video-0.10'
    --   package 'gstreamer-video-0.10' not found
    -- checking for module 'gstreamer-app-0.10'
    --   package 'gstreamer-app-0.10' not found
    -- checking for module 'gstreamer-riff-0.10'
    --   package 'gstreamer-riff-0.10' not found
    -- checking for module 'gstreamer-pbutils-0.10'
    --   package 'gstreamer-pbutils-0.10' not found
    -- Looking for linux/videodev.h
    -- Looking for linux/videodev.h - not found
    -- Looking for linux/videodev2.h
    -- Looking for linux/videodev2.h - found
    -- Looking for sys/videoio.h
    -- Looking for sys/videoio.h - not found
    -- checking for module 'libavresample'
    --   package 'libavresample' not found
    -- Looking for libavformat/avformat.h
    -- Looking for libavformat/avformat.h - found
    -- Looking for ffmpeg/avformat.h
    -- Looking for ffmpeg/avformat.h - not found
    -- found IPP (ICV version): 8.2.1 [8.2.1]
    -- at: /home/peter/Programme/opencv/3rdparty/ippicv/unpack/ippicv_lnx
    -- CUDA detected: 7.5
    -- CUDA NVCC target flags: -gencode;arch=compute_50,code=sm_50
    -- To enable PlantUML support, set PLANTUML_JAR environment variable or pass -DPLANTUML_JAR=<filepath> option to cmake
    -- Found PythonInterp: /usr/bin/python2.7 (found suitable version "2.7.8", minimum required is "2.7")
    -- Found PythonLibs: /usr/lib64/libpython2.7.so (found suitable exact version "2.7.8")
    -- Found PythonInterp: /usr/bin/python3.4 (found suitable version "3.4.1", minimum required is "3.4")
    -- Found PythonLibs: /usr/lib64/libpython3.4m.so (found suitable exact version "3.4.1")
    Traceback (most recent call last):
    File "<string>", line 1, in <module>
    ImportError: No module named 'numpy'
    -- Found apache ant 1.8.0: /usr/bin/ant
    -- Could NOT find JNI (missing:  JAVA_INCLUDE_PATH JAVA_INCLUDE_PATH2 JAVA_AWT_INCLUDE_PATH)
    -- Could NOT find Matlab (missing:  MATLAB_MEX_SCRIPT      MATLAB_INCLUDE_DIRS MATLAB_ROOT_DIR MATLAB_LIBRARIES MATLAB_LIBRARY_DIRS MATLAB_MEXEXT MATLAB_ARCH MATLAB_BIN)
    -- VTK support is disabled. Incompatible combination: OpenCV + Qt5 and VTK ver.6.1.0 + Qt4
    --
    -- General configuration for OpenCV 3.0.0-dev =====================================
    --   Version control:               3.0.0-528-g3a3f403-dirty
    --
    --   Platform:
    --     Host:                        Linux 3.16.7-24-desktop x86_64
    --     CMake:                       3.0.2
    --     CMake generator:             Unix Makefiles
    --     CMake build tool:            /usr/bin/gmake
    --     Configuration:               Release
    --
    --   C/C++:
    --     Built as dynamic libs?:      YES
    --     C++ Compiler:                /usr/bin/c++  (ver 4.8.3)
    --     C++ flags (Release):         -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wno-narrowing -Wno-delete-non-virtual-dtor -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -mno-avx -msse3 -mno-ssse3 -mno-sse4.1 -mno-sse4.2 -ffunction-sections -fvisibility=hidden -fvisibility-inlines-hidden -fopenmp -O3 -DNDEBUG  -DNDEBUG
    --     C++ flags (Debug):           -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wno-narrowing -Wno-delete-non-virtual-dtor -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -mno-avx -msse3 -mno-ssse3 -mno-sse4.1 -mno-sse4.2 -ffunction-sections -fvisibility=hidden -fvisibility-inlines-hidden -fopenmp -g  -O0 -DDEBUG -D_DEBUG
    --     C Compiler:                  /usr/bin/cc
    --     C flags (Release):           -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wno-narrowing -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -mno-avx -msse3 -mno-ssse3 -mno-sse4.1 -mno-sse4.2 -ffunction-sections -fvisibility=hidden -fopenmp -O3 -DNDEBUG  -DNDEBUG
    --     C flags (Debug):             -fsigned-char -W -Wall -Werror=return-type -Werror=non-virtual-dtor -Werror=address -Werror=sequence-point -Wformat -Werror=format-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wno-narrowing -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -msse -msse2 -mno-avx -msse3 -mno-ssse3 -mno-sse4.1 -mno-sse4.2 -ffunction-sections -fvisibility=hidden -fopenmp -g  -O0 -DDEBUG -D_DEBUG
    --     Linker flags (Release):      
    --     Linker flags (Debug):        
    --     Precompiled headers:         YES
    --     Extra dependencies:          /usr/lib64/libcuda.so /usr/lib64/libnvcuvid.so Qt5::Core Qt5::Gui Qt5::Widgets Qt5::Test Qt5::Concurrent Qt5::OpenGL /usr/lib64/libwebp.so /usr/lib64/libpng.so /usr/lib64/libz.so /usr/lib64/libtiff.so /usr/lib64/libjasper.so /usr/lib64/libjpeg.so gstbase-0.10 gstreamer-0.10 gobject-2.0 gmodule-2.0 gthread-2.0 xml2 ucil glib-2.0 unicap dc1394 xine v4l1 v4l2 avcodec avformat avutil swscale gphoto2 gphoto2_port exif /usr/lib64/libbz2.so dl m pthread rt /usr/lib64/libGLU.so /usr/lib64/libGL.so /usr/lib64/libSM.so /usr/lib64/libICE.so /usr/lib64/libX11.so /usr/lib64/libXext.so cudart nppc nppi npps cublas cufft
    --     3rdparty dependencies:       IlmImf ippicv
    --
    --   OpenCV modules:
    --     To be built:                 cudev hal core cudaarithm flann imgproc ml video cudabgsegm cudafilters cudaimgproc cudawarping imgcodecs photo shape videoio cudacodec highgui objdetect ts features2d calib3d cudafeatures2d cudalegacy cudaobjdetect cudaoptflow cudastereo stitching superres videostab python2
    --     Disabled:                    world
    --     Disabled by dependency:      -
    --     Unavailable:                 java python3 viz
    --
    --   GUI:
    --     QT 5.x:                      YES (ver 5.4.2)
    --     QT OpenGL support:           YES (Qt5::OpenGL 5.4.2)
    --     OpenGL support:              YES (/usr/lib64/libGLU.so /usr/lib64/libGL.so /usr/lib64/libSM.so /usr/lib64/libICE.so /usr/lib64/libX11.so /usr/lib64/libXext.so)
    --     VTK support:                 NO
    --
    --   Media I/O:
    --     ZLib:                        /usr/lib64/libz.so (ver 1.2.8)
    --     JPEG:                        /usr/lib64/libjpeg.so (ver )
    --     WEBP:                        /usr/lib64/libwebp.so (ver encoder: 0x0202)
    --     PNG:                         /usr/lib64/libpng.so (ver 1.2.51)
    --     TIFF:                        /usr/lib64/libtiff.so (ver 42 - 4.0.4)
    --     JPEG 2000:                   /usr/lib64/libjasper.so (ver 1.900.1)
    --     OpenEXR:                     build (ver 1.7.1)
    --     GDAL:                        NO
    --
    --   Video I/O:
    --     DC1394 1.x:                  NO
    --     DC1394 2.x:                  YES (ver 2.2.2)
    --     FFMPEG:                      YES
    --       codec:                     YES (ver 57.3.100)
    --       format:                    YES (ver 57.2.100)
    --       util:                      YES (ver 55.2.100)
    --       swscale:                   YES (ver 4.0.100)
    --       resample:                  NO
    --       gentoo-style:              YES
    --     GStreamer:                   NO
    --     OpenNI:                      NO
    --     OpenNI PrimeSensor Modules:  NO
    --     OpenNI2:                     NO
    --     PvAPI:                       NO
    --     GigEVisionSDK:               NO
    --     UniCap:                      YES (ver 0.9.12)
    --     UniCap ucil:                 YES (ver 0.9.10)
    --     V4L/V4L2:                    Using libv4l1 (ver 1.2.1) / libv4l2 (ver 1.2.1)
    --     XIMEA:                       NO
    --     Xine:                        YES (ver 1.2.6)
    --     gPhoto2:                     YES
    --
    --   Parallel framework:            OpenMP
    --
    --   Other third-party libraries:
    --     Use IPP:                     8.2.1 [8.2.1]
    --          at:                     /home/peter/Programme/opencv/3rdparty/ippicv/unpack/ippicv_lnx
    --     Use IPP Async:               NO
    --     Use VA:                      NO
    --     Use Intel VA-API/OpenCL:     NO
    --     Use Eigen:                   YES (ver 3.2.2)
    --     Use Cuda:                    YES (ver 7.5)
    --     Use OpenCL:                  YES
    --
    --   NVIDIA CUDA
    --     Use CUFFT:                   YES
    --     Use CUBLAS:                  YES
    --     USE NVCUVID:                 YES
    --     NVIDIA GPU arch:             50
    --     NVIDIA PTX archs:
    --     Use fast math:               YES
    --
    --   OpenCL:
    --     Version:                     dynamic
    --     Include path:                /home/peter/Programme/opencv/3rdparty/include/opencl/1.2
    --     Use AMDFFT:                  NO
    --     Use AMDBLAS:                 NO
    --
    --   Python 2:
    --     Interpreter:                 /usr/bin/python2.7 (ver 2.7.8)
    --     Libraries:                   /usr/lib64/libpython2.7.so (ver 2.7.8)
    --     numpy:                       /usr/lib64/python2.7/site-packages/numpy/core/include (ver 1.9.0)
    --     packages path:               lib/python2.7/site-packages
    --
    --   Python 3:
    --     Interpreter:                 /usr/bin/python3.4 (ver 3.4.1)
    --
    --   Python (for build):            /usr/bin/python2.7
    --
    --   Java:
    --     ant:                         /usr/bin/ant (ver 1.8.0)
    --     JNI:                         NO
    --     Java wrappers:               NO
    --     Java tests:                  NO
    --
    --   Matlab:
    --     mex:                         NO
    --
    --   Documentation:
    --     Doxygen:                     /usr/bin/doxygen (ver 1.8.8)
    --     PlantUML:                    NO
    --
    --   Tests and samples:
    --     Tests:                       YES
    --     Performance tests:           YES
    --     C/C++ Examples:              NO
    --
    --   Install path:                  /usr/local
    --
    --   cvconfig.h is in:              /home/peter/Programme/opencv/build
    --        
    -----------------------------------------------------------------
    --
    -- Configuring done
    -- Generating done
    -- Build files have been written to: /home/peter/Programme/opencv/build