
Media (91)

Other articles (30)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all types.
    It creates "media" items: a "media" is an article, in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a given "media" article;

  • Improvements to the base version

    13 September 2013

    Nicer multiple selection
    The Chosen plugin improves the ergonomics of multiple-select fields. Compare the following two images.
    To enable it, activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...)

On other sites (4825)

  • avconv: from multiple PNGs to movie not exporting correctly

    13 April 2016, by abhra

    I am trying to generate an mp4 movie from a set of PNGs using avconv on my Debian 8 system. The mp4 file it generates basically loops over the first image for some time. Here is the command I am using:

    avconv -r 10 -start_number 8 -i images_%06d.png -b:v 1000k -vf scale=640 :-1 test.mp4

    Output is

    avconv version 11.6-6:11.6-1~deb8u1, Copyright (c) 2000-2014 the Libav developers
      built on Mar  2 2016 23:00:02 with gcc 4.9.2 (Debian 4.9.2-10)
    Input #0, image2, from 'images_%06d.png':
      Duration: 00:00:16.00, start: 0.000000, bitrate: N/A
        Stream #0.0: Video: png, rgb24, 2400x1801, 25 fps, 25 tbn
    File 'test.mp4' already exists. Overwrite? [y/N] y
    [scale @ 0x820c60] The <w>:<h>:flags=<flags> option syntax is deprecated. Use either <w>:<h> or w=<w>:h=<h>:flags=<flags>.
    [libx264 @ 0x837760] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX AVX2 FMA3 LZCNT BMI2
    [libx264 @ 0x837760] profile High, level 2.2
    [libx264 @ 0x837760] 264 - core 142 r2431 a5831aa - H.264/MPEG-4 AVC codec - Copyleft 2003-2014 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=10 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=abr mbtree=1 bitrate=1000 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'test.mp4':
      Metadata:
        encoder: Lavf56.1.0
        Stream #0.0: Video: libx264, yuv420p, 640x480, q=-1--1, 1000 kb/s, 10 fps, 10 tbn, 10 tbc
        Metadata:
          encoder: Lavc56.1.0 libx264
    Stream mapping:
      Stream #0:0 -> #0:0 (png (native) -> h264 (libx264))
    Press ctrl-c to stop encoding
    frame=   17 fps=  0 q=0.0 size=       0kB time=10000000000.00 bitrate=   0.0kbit
    frame=   34 fps= 32 q=0.0 size=       0kB time=10000000000.00 bitrate=   0.0kbit
    frame=   51 fps= 32 q=0.0 size=       0kB time=10000000000.00 bitrate=   0.0kbit
    frame=  393 fps= 32 q=0.0 Lsize=     139kB time=39.00 bitrate=  29.2kbits/s
    video:132kB audio:0kB other streams:0kB global headers:0kB muxing overhead: 5.284794%
    [libx264 @ 0x837760] frame I:2    Avg QP: 3.69  size: 51320
    [libx264 @ 0x837760] frame P:99   Avg QP: 0.32  size:   242
    [libx264 @ 0x837760] frame B:292  Avg QP: 0.15  size:    26
    [libx264 @ 0x837760] consecutive B-frames:  0.8%  0.5%  0.0% 98.7%
    [libx264 @ 0x837760] mb I  I16..4: 45.5% 19.2% 35.3%
    [libx264 @ 0x837760] mb P  I16..4:  0.0%  0.0%  0.0%  P16..4:  0.9%  0.0%  0.0%  0.0%  0.0%  skip:99.0%
    [libx264 @ 0x837760] mb B  I16..4:  0.0%  0.0%  0.0%  B16..8:  0.2%  0.0%  0.0%  direct: 0.0%  skip:99.8%  L0:22.3% L1:77.7% BI: 0.0%
    [libx264 @ 0x837760] final ratefactor: -21.09
    [libx264 @ 0x837760] 8x8 transform intra:18.9% inter:41.7%
    [libx264 @ 0x837760] coded y,uvDC,uvAC intra: 34.1% 0.0% 0.0% inter: 0.1% 0.0% 0.0%
    [libx264 @ 0x837760] i16 v,h,dc,p: 81% 14%  5%  0%
    [libx264 @ 0x837760] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 49% 14% 36%  0%  0%  0%  0%  0%  0%
    [libx264 @ 0x837760] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 36% 28% 19%  3%  3%  2%  4%  2%  3%
    [libx264 @ 0x837760] i8c dc,h,v,p: 100%  0%  0%  0%
    [libx264 @ 0x837760] Weighted P-Frames: Y:0.0% UV:0.0%
    [libx264 @ 0x837760] ref P L0: 99.3%  0.0%  0.6%  0.1%
    [libx264 @ 0x837760] ref B L0:  4.2% 95.8%
    [libx264 @ 0x837760] ref B L1: 99.8%  0.2%
    [libx264 @ 0x837760] kb/s:27.34

    I have also tried

    cat *.png | avconv -f image2pipe -i - -b:v 1000k -vf scale=640 :-1 test2.mp4

    output shows

    avconv version 11.6-6:11.6-1~deb8u1, Copyright (c) 2000-2014 the Libav developers
      built on Mar  2 2016 23:00:02 with gcc 4.9.2 (Debian 4.9.2-10)
    Codec AVOption b (set bitrate (in bits/s)) specified for output file #0 (test2.mp4) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
    Output #0, image2pipe, to 'test2.mp4':
    Output file #0 does not contain any stream

    When extracting frames from test.mp4 with

    avconv -i test.mp4 -r 30 -f image2 %04d.png

    I found 1000 or more copies of images_000001.png.

    Could you please help? Have I made a mistake in the commands, or am I missing some codec options? Thanks for the help.
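The deprecated-syntax warning in the log points at one concrete problem: the stray space in scale=640 :-1 splits the filter argument, so the scale filter is not parsed as intended. A corrected invocation, keeping every other option as in the question (a sketch, not tested against this exact avconv build), would be:

```shell
# Same command, but with no space inside the scale filter argument
# ("640 :-1" was being parsed as two separate tokens).
avconv -r 10 -start_number 8 -i images_%06d.png -b:v 1000k \
       -vf scale=640:-1 test.mp4
```

It is also worth double-checking that the input files really are numbered starting at images_000008.png, since -start_number 8 tells the image2 demuxer where the sequence begins.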

  • JavaCV grab frame method delays and returns old frames

    27 January 2019, by Null Pointer

    I'm trying to create a video player in Java using JavaCV and its FFmpegFrameGrabber class. Simply, inside a loop, I use

    .grab() to get a frame and then paint it on a panel.

    The problem is that the player falls behind: after 30 seconds of wall-clock time, only 20 seconds of video have played.

    The source is fine; other players play the stream normally. The bottleneck is possibly the long painting time.

    What I do not understand is: why does the .grab() method bring me a frame from 10 seconds ago? Shouldn't it just grab the frame that is being streamed at the moment?

    (Sorry for not providing a working code, it’s all over different huge classes)

    I use the following grabber options (selected by a colleague):

    grabber.setImageHeight(480);
    grabber.setImageWidth(640);
    grabber.setOption("reconnect", "1");
    grabber.setOption("reconnect_at_eof", "1");
    grabber.setOption("reconnect_streamed", "1");
    grabber.setOption("reconnect_delay_max", "2");
    grabber.setOption("preset", "veryfast");
    grabber.setOption("probesize", "192");
    grabber.setOption("tune", "zerolatency");
    grabber.setFrameRate(30.0);
    grabber.setOption("buffer_size", "" + this.bufferSize);
    grabber.setOption("max_delay", "500000");
    grabber.setOption("stimeout", String.valueOf(6000000));
    grabber.setOption("loglevel", "quiet");
    grabber.start();

    Thanks
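One way to reason about the symptom above: grab() decodes frames in order, so if painting is slower than the stream's frame rate, each grabbed frame is older and older relative to real time. A common workaround is to measure how far playback lags the wall clock and discard that many frames before painting the next one. A minimal sketch of such a helper, in plain Java (the method name and arithmetic are mine, not part of the JavaCV API):

```java
public class FrameSync {
    // Returns how many frames playback lags behind wall-clock time.
    // elapsedMicros: wall-clock microseconds since playback started.
    // frameTsMicros: timestamp of the frame just grabbed.
    // frameIntervalMicros: microseconds per frame (1_000_000 / fps).
    static long framesBehind(long elapsedMicros, long frameTsMicros,
                             long frameIntervalMicros) {
        long lag = elapsedMicros - frameTsMicros;
        return lag > 0 ? lag / frameIntervalMicros : 0;
    }

    public static void main(String[] args) {
        // At ~30 fps, being 1 second behind means roughly 30 frames to drop.
        System.out.println(framesBehind(31_000_000L, 30_000_000L, 33_333L));
    }
}
```

In the grab loop, one would call grabber.grab() that many extra times, discarding the results, before converting and painting a frame; this trades dropped frames for staying close to real time.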

  • OpenCV returns an Empty Frame on video.read

    20 March 2019, by Ikechukwu Anude

    Below is the relevant code:

    import cv2 as cv
    import numpy as np

    video = cv.VideoCapture(0) #tells the object to use the built-in camera

    #create a face cascade object
    face_cascade = cv.CascadeClassifier(r"C:\Users\xxxxxxx\AppData\Roaming\Python\Python36\site-packages\cv2\data\haarcascade_frontalcatface.xml")

    a = 1
    #create loop to display a video
    while True:
       a = a + 1
       check, frame = video.read()
       print(frame)

       #converts to a gray scale img
       gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)

       #create the faces
       faces = face_cascade.detectMultiScale(gray, scaleFactor=1.5, minNeighbors=5)

       for(x, y, w, h) in faces:
           print(x, y, w, h)

       #show the image
       cv.imshow('capturing', gray)

       key = cv.waitKey(1) #gen a new frame every 1ms

       if key == ord('q'): #once you enter 'q' the loop will be exited
           break

    print(a) #this will print the number of frames

    video.release() #release the web cam

    #destroy the windows when done
    cv.destroyAllWindows()

    The code displays video captured from my webcam. Despite that, OpenCV doesn't seem to be processing any frames, as all the frames look like this

    [[0 0 0]
     [0 0 0]
     [0 0 0]
     ...
     [0 0 0]
     [0 0 0]
     [0 0 0]]]

    which I assume means that they are empty.

    This, I believe, is preventing the algorithm from detecting my face in the frame. I have a feeling the issue lies in the ffmpeg codec, but I'm not entirely sure how to proceed even if that is the case.

    OS: Windows 10
    Language: Python

    Why is the frame empty, and how can I get OpenCV to detect my face in the frame?
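A first debugging step for questions like this is to check the boolean returned by video.read() before processing, so an all-black or missing frame is caught explicitly instead of being fed to cvtColor. A minimal sketch of such a guard (the helper name is my own, not an OpenCV API; it needs only NumPy):

```python
import numpy as np

def frame_is_usable(ok, frame):
    """True only if VideoCapture.read() reported success and the
    frame actually contains non-zero pixel data."""
    return bool(ok) and frame is not None and bool(frame.any())

# In the capture loop one would write, e.g.:
#   check, frame = video.read()
#   if not frame_is_usable(check, frame):
#       continue  # skip empty frames instead of processing them
```

Note also that the cascade loaded above is haarcascade_frontalcatface.xml, which is trained on cat faces; OpenCV ships haarcascade_frontalface_default.xml in the same data directory for human faces.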