
Other articles (74)

  • Use, discuss, criticize

    13 April 2011

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

  • Customize by adding a logo, a banner or a background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
    To use it, activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)

On other sites (8568)

  • RTP/UDP or RTSP for accessing stream and passing frame to OpenCV?

    15 January 2020, by xor31four

    Apologies for my inexperience in this domain. I am trying to implement an algorithm that detects the occurrence of a particular event in real time. The event is a consecutive growth of motion across 5 consecutive frames, almost analogous to a growing sphere or beach ball.

    I am able to detect the event on pre-recorded video in .avi format (MJPEG frames) with EmguCV (a C# wrapper for OpenCV). The method I use is based on background subtraction, as outlined here: https://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/
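
    For illustration, here is a minimal Python/OpenCV sketch of that background-subtraction idea; the stream URL, blur and threshold values, and the 5-frame "growing motion" check are placeholders and assumptions, not the code actually used:

    import cv2

    cap = cv2.VideoCapture("rtsp://XXX.XXX.X.XX/stream1.sdp")
    background = None
    recent_areas = []  # total motion area of the most recent frames

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
        if background is None:
            background = gray  # first frame becomes the reference background
            continue

        # Difference against the background, then threshold and dilate into motion blobs
        delta = cv2.absdiff(background, gray)
        thresh = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)[1]
        thresh = cv2.dilate(thresh, None, iterations=2)
        contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x signature
        area = sum(cv2.contourArea(c) for c in contours)

        # Flag the event when the motion area grows over 5 consecutive frames
        recent_areas = (recent_areas + [area])[-5:]
        if len(recent_areas) == 5 and all(a < b for a, b in zip(recent_areas, recent_areas[1:])):
            print("event: motion grew over 5 consecutive frames")

    cap.release()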

    The problem is that the live video transport stream is usually in the format rtsp://XXX.XXX.X.XX/stream1.sdp

    EmguCV on Windows can't decode this H.264 stream for some reason that I am still trying to figure out. I tried the same URL using Python and OpenCV and received a non-matching transport in server reply message, similar to this question: "Nonmatching transport in server reply" when cv2.VideoCapture rtsp onvif camera, how to fix? The answer there didn't work for me.
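
    For what it's worth, that "Nonmatching transport in server reply" error often means the camera insists on RTSP over TCP while the FFmpeg backend asks for UDP. In recent OpenCV builds that use the FFmpeg backend, the transport can be forced through an environment variable; a minimal, untested sketch (the URL is a placeholder):

    import os
    import cv2

    # Must be set before the first VideoCapture is created; only honoured by
    # OpenCV builds that use the FFmpeg backend (format: "key;value|key;value").
    os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = "rtsp_transport;tcp"

    cap = cv2.VideoCapture("rtsp://XXX.XXX.X.XX/stream1.sdp", cv2.CAP_FFMPEG)
    ok, frame = cap.read()
    print("opened:", cap.isOpened(), "got frame:", ok)
    cap.release()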

    I can open the RTSP URL using VLCPlayer and its corresponding C# library; from my understanding it uses ffmpeg, although I may be wrong. FFmpeg on the command line can also access the stream.

    EmguCV also uses ffmpeg as a backend, which is why I am very confused as to why it can't open the RTSP URL.

    (Screenshot: module tree shown when VLCPlayer opens the RTSP stream.)

    From my understanding, EmguCV doesn't use live555 or avcodec.

    I've noticed that if I change the streamer configuration to use UDP or RTP rather than RTSP, EmguCV can access the H.264 URL, although the URL is then in the format rtp/udp://XXX.XXX.X.XX:XXXXX with no .sdp extension.

    I would highly appreciate it if someone with more experience could give me some pointers; I have a great deal to learn even though I have spent a lot of time researching this topic. With regard to keeping the detection reliable, would it be better to process H.264 frames with possible distortion, or MJPEG frames?

    I can’t afford a delay longer than 1-2 seconds, and would ideally like to continue with the current method used to detect the event.

    From my current understanding, here are the routes I can take:

    1) Use RTP/UDP and process the H.264 video using EmguCV. There is some distortion in the video when there is a large amount of movement, and I also receive several H.264 error messages during the stream:

    [h264 @ 00000124f13a5080] SPS unavailable in decode_picture_timing
    [h264 @ 00000124f13a5080] non-existing PPS 0 referenced
    [h264 @ 00000124f13a5080] decode_slice_header error
    [h264 @ 00000124f13a5080] no frame!
    [h264 @ 00000124f135eac0] Missing reference picture, default is 0
    [h264 @ 00000124f135eac0] decode_slice_header error
    [h264 @ 00000124f13a5080] cbp too large (6929) at 11 20
    [h264 @ 00000124f13a5080] error while decoding MB 11 20
    [h264 @ 00000124f135eac0] top block unavailable for requested intra mode -1
    [h264 @ 00000124f135eac0] error while decoding MB 3 0
    [h264 @ 00000124f124e580] cbp too large (96) at 33 0
    [h264 @ 00000124f124e580] error while decoding MB 33 0
    [h264 @ 00000124f19940c0] top block unavailable for requested intra mode
    [h264 @ 00000124f19940c0] error while decoding MB 1 1

    2) Keep the RTSP protocol and use libav to decode the frames, then pass them to EmguCV, following this answer: https://www.raspberrypi.org/forums/viewtopic.php?t=83127 (a minimal sketch of this idea appears after this list). I'm not sure if this will introduce a huge delay.

    3) Keep the RTSP protocol and use ffmpeg to convert the H.264 stream to MJPEG, then access that URL instead? Again, I'm not sure this is feasible if it introduces a significant delay.

    4) Use a Linux machine rather than Windows and configure a GStreamer backend (not ideal).
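
    For route 2, a common pattern is to let an ffmpeg process do the RTSP handling and H.264 decoding and pipe raw BGR frames into the application; in C# the same idea works by reading ffmpeg's stdout. A minimal Python sketch, where the URL, the resolution and the assumption that ffmpeg is on PATH are all placeholders:

    import subprocess
    import numpy as np

    URL, WIDTH, HEIGHT = "rtsp://XXX.XXX.X.XX/stream1.sdp", 1280, 720  # probe these first, e.g. with ffprobe

    # ffmpeg handles RTSP + H.264 and writes raw BGR frames to stdout
    proc = subprocess.Popen(
        ["ffmpeg", "-rtsp_transport", "tcp", "-i", URL,
         "-f", "rawvideo", "-pix_fmt", "bgr24", "pipe:1"],
        stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

    frame_size = WIDTH * HEIGHT * 3
    while True:
        raw = proc.stdout.read(frame_size)
        if len(raw) < frame_size:
            break  # stream ended or ffmpeg exited
        frame = np.frombuffer(raw, dtype=np.uint8).reshape((HEIGHT, WIDTH, 3))
        # frame is an ordinary numpy/OpenCV image; run the motion detection here

    proc.terminate()

    Latency here is mostly ffmpeg's own input buffering, which can usually be kept small, so this does not necessarily break the 1-2 second budget.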

    Thank you for taking the time to read this post.

  • fastest ffmpeg without caring about quality

    31 May 2019, by RedDeath

    I would like to convert any video to .mp4 as fast as possible, without caring about quality loss. I have used the following options, with which I have been able to finish the process in 37 seconds for a 10-second video.

    -vcodec h264
    -crf 32
    -preset ultrafast

    However, 37 seconds is still too long for a 10-second video. Are there any improvements I can make to the command to reduce the execution time?
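
    If the source is already H.264/AAC, as the log below shows, and the goal is only to repackage it as .mp4 rather than to shrink it, a stream copy skips re-encoding entirely and takes a fraction of the time (file names below are placeholders):

    ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4

    If re-encoding is genuinely needed, the libx264 work dominates, so beyond -preset ultrafast the main remaining levers are usually lowering the output resolution or frame rate.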


    Edit (extra info):

    I'm using FFmpeg Android (implementation 'com.writingminds:FFmpegAndroid:0.3.2'), though the commands are largely the same for any FFmpeg (with a few variants depending on the FFmpeg version).

    The command used in my case, which gave me the fastest result so far, is:

       mFfmpeg.execute(
           arrayOf(
               "-i" , videoCopy?.path,
               "-vcodec", "h264",
               "-crf", "32",
               "-preset", "ultrafast",
               "-y", uploadFile?.path),
           object : ExecuteBinaryResponseHandler() { ... })

    As a regular FFmpeg command line, this would be:

    "-ffmpeg -i {video?.path} -vcodec h264 -crf 32 -preset ultrafast -y {uploadFile?.path}"

    Where video is my original video File and uploadFile is the File where I want to save the result into.

    On a Samsung J3 (SM-J320M; you can find its specifications online) this command takes the aforementioned 37 seconds.

    After executing the command, the first onProgress message returned by FFmpeg prints:

    ffmpeg version n3.0.1 Copyright (c) 2000-2016 the FFmpeg developers  built with gcc 4.8 (GCC)  
    configuration:
       --target-os=linux
       --cross-prefix=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/bin/arm-linux-androideabi-
       --arch=arm
       --cpu=cortex-a8
       --enable-runtime-cpudetect
       --sysroot=/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/sysroot
       --enable-pic
       --enable-libx264
       --enable-libass
       --enable-libfreetype
       --enable-libfribidi
       --enable-libmp3lame
       --enable-fontconfig
       --enable-pthreads
       --disable-debug
       --disable-ffserver
       --enable-version3
       --enable-hardcoded-tables
       --disable-ffplay
       --disable-ffprobe
       --enable-gpl
       --enable-yasm
       --disable-doc
       --disable-shared
       --enable-static
       --pkg-config=/home/vagrant/SourceCode/ffmpeg-android/ffmpeg-pkg-config
       --prefix=/home/vagrant/SourceCode/ffmpeg-android/build/armeabi-v7a
       --extra-cflags='-I/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/include -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fno-strict-overflow -fstack-protector-all'
       --extra-ldflags='-L/home/vagrant/SourceCode/ffmpeg-android/toolchain-android/lib -Wl,-z,relro -Wl,-z,now -pie'
       --extra-libs='-lpng -lexpat -lm'
       --extra-cxxflags=
     libavutil      55. 17.103 / 55. 17.103
     libavcodec     57. 24.102 / 57. 24.102
     libavformat    57. 25.100 / 57. 25.100
     libavdevice    57.  0.101 / 57.  0.101
     libavfilter     6. 31.100 /  6. 31.100
     libswscale      4.  0.100 /  4.  0.100
     libswresample   2.  0.101 /  2.  0.101
     libpostproc    54.  0.100 / 54.  0.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/storage/emulated/0/DCIM/Yakatak/656.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: isommp42
        creation_time   : 2019-05-29 11:27:56
        location        : +51.5202-000.1435/
        location-eng    : +51.5202-000.1435/
      Duration: 00:00:09.47, start: 0.000000, bitrate: 12147 kb/s
        Stream #0:0(eng): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 1280x720, 11899 kb/s, 30.02 fps, 30 tbr, 90k tbn, 180k tbc (default)
        Metadata:
          rotate          : 90
          creation_time   : 2019-05-29 11:27:56
          handler_name    : VideoHandle
        Side data:
          displaymatrix: rotation of -90.00 degrees
        Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 256 kb/s (default)
        Metadata:
          creation_time   : 2019-05-29 11:27:56
          handler_name    : SoundHandle
    [libx264 @ 0xb5428800] using cpu capabilities: none!
    [libx264 @ 0xb5428800] profile Constrained Baseline, level 3.1
    [libx264 @ 0xb5428800] 264 - core 148 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=25 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=32.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0

    Output #0, mp4, to '/storage/emulated/0/DCIM/Yakatak/uploadFile.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: isommp42
        location-eng    : +51.5202-000.1435/
        location        : +51.5202-000.1435/
        encoder         : Lavf57.25.100
        Stream #0:0(eng): Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 720x1280, q=-1--1, 30 fps, 15360 tbn, 30 tbc (default)
        Metadata:
          handler_name    : VideoHandle
          creation_time   : 2019-05-29 11:27:56
          encoder         : Lavc57.24.102 libx264
        Side data:
          unknown side data type 10 (24 bytes)
        Stream #0:1(eng): Audio: aac (LC) ([64][0][0][0] / 0x0040), 48000 Hz, stereo, fltp, 128 kb/s (default)
        Metadata:
          creation_time   : 2019-05-29 11:27:56
          handler_name    : SoundHandle
          encoder         : Lavc57.24.102 aac
    Stream mapping:
      Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
      Stream #0:1 -> #0:1 (aac (native) -> aac (native))
    Press [q] to stop, [?] for help
    frame=    0 fps=0.0

  • How to plot an animated graph

    2 August 2019, by Mukonza Sabastian Simbarashe

    Following along with How to Create Animated Graphs in Python, when I construct an animated plot and then try to write it out with ffmpeg, I get the following error:

    'Requested MovieWriter ({}) not available'.format(name))
    RuntimeError: Requested MovieWriter (ffmpeg) not available

    After getting this error, I initially tried to install ffmpeg using pip with the following command:

    python -m install ffmpeg

    and it seems to have successfully installed ffmpeg, but going back to my code I still get the same error.

    Find my code below:

    import numpy as np
    import pandas as pd
    import seaborn as sns
    import matplotlib
    import matplotlib.pyplot as plt
    import matplotlib.animation as animation

    overdoses = pd.read_excel(r'C:\Users\ACER\Desktop\overdose_data_1999-2015.xls', sheet_name='Online', skiprows=6)

    def get_data(table, rownum, title):
        # pull one row of the sheet (from the third column on) and cast to float
        data = pd.DataFrame(table.loc[rownum][2:]).astype(float)
        data.columns = [title]
        return data

    title = 'Heroin Overdoses'
    d = get_data(overdoses,18,title)
    x = np.array(d.index)
    y = np.array(d['Heroin Overdoses'])
    overdose = pd.DataFrame(y,x)
    overdose.columns = [title]
    Writer = animation.writers['ffmpeg']

    Here is the stack trace:

    Traceback (most recent call last):
     File "C:\Python\Python36\lib\site-packages\matplotlib\animation.py", line 161, in __getitem__
       return self.avail[name]
    KeyError: 'ffmpeg'

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
     File "", line 1, in <module>
       Writer = animation.writers['ffmpeg']
     File "C:\Python\Python36\lib\site-packages\matplotlib\animation.py", line 164, in __getitem__
       'Requested MovieWriter ({}) not available'.format(name))
    RuntimeError: Requested MovieWriter (ffmpeg) not available
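
    A likely cause, though it is an assumption rather than something confirmed in the post, is that pip's ffmpeg package is not the ffmpeg executable matplotlib looks for on PATH. One way to check, and to point matplotlib at a separately installed binary (the path below is a placeholder), is:

    import shutil
    import matplotlib

    # matplotlib's FFMpegWriter needs the ffmpeg *executable*; "pip install ffmpeg"
    # installs a Python package of the same name, not that binary.
    print(shutil.which("ffmpeg"))  # None means no ffmpeg binary is on PATH

    # If ffmpeg.exe lives outside PATH, point matplotlib at it explicitly
    # before matplotlib.animation is first used:
    matplotlib.rcParams['animation.ffmpeg_path'] = r'C:\ffmpeg\bin\ffmpeg.exe'

    import matplotlib.animation as animation
    Writer = animation.writers['ffmpeg']  # should now resolve without RuntimeError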