Advanced search

Other articles (47)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer flash player is used as a fallback.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (11252)

  • Improve performance of FFMPEG with Java [on hold]

    20 January 2018, by Nitishkumar Singh

    This question is about improving the performance of a Java application. In this Java application, I am trying to convert a video into small clips.

    Here is the implementation class:

    package ffmpeg.clip.process;

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.nio.file.Paths;
    import java.time.Duration;
    import java.time.LocalTime;
    import java.time.temporal.ChronoUnit;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutionException;
    import java.util.stream.Collectors;

    import ffmpeg.clip.utils.VideoConstant;
    import ffmpeg.clip.utils.VideoUtils;

    /*
    * @author Nitishkumar Singh
    * @Description: class will use ffmpeg to break a source video into clips
    */
    public class VideoToClip {

       /*
        * Prevent from creating instance
        */
       private VideoToClip() {
       }

       /**
        * Get video duration in milliseconds.
        *
        * @throws Exception if the file does not exist or the video file has data issues
        */
       static LocalTime getDuration(String sourceVideoFile) throws Exception {
           if (!Paths.get(sourceVideoFile).toFile().exists())
               throw new Exception("File does not exist!!");

           Process proc = new ProcessBuilder(VideoConstant.SHELL, VideoConstant.SHELL_COMMAND_STRING_ARGUMENT,
                   String.format(VideoConstant.DURATION_COMMAND, sourceVideoFile)).start();
           boolean errorOccured = (new BufferedReader(new InputStreamReader(proc.getErrorStream())).lines()
                   .count() > VideoConstant.ZERO);
           String durationInSeconds = new BufferedReader(new InputStreamReader(proc.getInputStream())).lines()
                   .collect(Collectors.joining(System.lineSeparator()));
           proc.destroy();
           if (errorOccured || (durationInSeconds.length() == VideoConstant.ZERO))
               throw new Exception("Video file has some issues!");
           else
               return VideoUtils.parseHourMinuteSecondMillisecondFormat(durationInSeconds);
       }

       /**
        * Create clips for a video using start and end times.
        *
        * @throws IOException if the clip creation process fails; InterruptedException if the clip creation task is interrupted
        */
       static String toClipProcess(String sourceVideo, String outputDirectory, LocalTime start, LocalTime end,
               String fileExtension) throws IOException, InterruptedException, ExecutionException {

           String clipName = String.format(VideoConstant.CLIP_FILE_NAME,
                   VideoUtils.getHourMinuteSecondMillisecondFormat(start),
                   VideoUtils.getHourMinuteSecondMillisecondFormat(end), fileExtension);

           String command = String.format(VideoConstant.FFMPEG_OUTPUT_COMMAND, sourceVideo,
                   VideoUtils.getHourMinuteSecondMillisecondFormat(start),
                   VideoUtils.getHourMinuteSecondMillisecondFormat(end.minus(start.toNanoOfDay(), ChronoUnit.NANOS)),
                   outputDirectory, clipName);
           LocalTime startTime = LocalTime.now();
           System.out.println("Clip Name: " + clipName);
           System.out.println("FFMPEG Process Execution Started");
           CompletableFuture<Process> completableFuture = CompletableFuture.supplyAsync(() -> {
               try {
                   return executeProcess(command);
               } catch (InterruptedException | IOException ex) {
                   throw new RuntimeException(ex);
               }
           });
           completableFuture.get();
           // remove
           LocalTime endTime = LocalTime.now();
           System.out.println("Clip Name: " + clipName);
           System.out.println("FFMPEG Process Execution Finished");
           System.out.println("Duration: " + Duration.between(startTime, endTime).toMillis() / 1000);

           return clipName;
       }

       /**
        * Create and Execute Process for each command
        */
       static Process executeProcess(String command) throws InterruptedException, IOException {
           Process clipProcess = Runtime.getRuntime().exec(command);
           clipProcess.waitFor();
           return clipProcess;
       }
    }

    The entire solution is available on GitHub. I am using CompletableFuture and running the FFmpeg command by creating a Java Process. It takes too much time: for a 40-minute video, it takes more than 49 minutes, even on a 64-CPU machine. I am trying to reduce the core count to 8 or so, as well as improve its performance. Any help would be really great.
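    Two things stand out in the code above (neither is confirmed by the question author, so treat this as a sketch): calling get() immediately after supplyAsync() makes each clip fully sequential anyway, and, more importantly, each clip is re-encoded, so the total cost scales with video duration. ffmpeg can extract a clip by stream copy (-c copy) instead, and placing -ss before -i makes it seek by keyframe rather than decode up to the start point. A minimal sketch of how such a command could be assembled (file names and timestamps here are hypothetical):

```python
def build_clip_command(source, start, duration, output):
    """Build an ffmpeg command that cuts a clip by stream copy.

    -ss before -i seeks in the input (fast, keyframe-accurate);
    -c copy copies the streams without re-encoding, so cutting takes
    seconds instead of scaling with the clip's length.
    """
    return [
        "ffmpeg", "-y",
        "-ss", start,       # input seek: applied before the file is decoded
        "-i", source,
        "-t", duration,     # clip length
        "-c", "copy",       # no re-encode
        output,
    ]

cmd = build_clip_command("input.mp4", "00:01:00", "60", "clip.mp4")
# pass `cmd` to subprocess.run(cmd, check=True) once ffmpeg is on PATH
```

    The trade-off: stream copy can only cut at keyframes, so clip boundaries may shift by a fraction of a second; if frame-exact cuts are required, re-encoding is unavoidable.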

  • Pipe video frame to OpenCV image and then to FFmpeg

    8 February 2018, by Pureheart

    There is a similar question here :
    Getting ’av_interleaved_write_frame() : Broken pipe’ error

    But what should I do if I want to write the data ?

    I put pipe_out.stdin.write(image.tostring()) in the while loop, like this:

    FFMPEG_BIN = "/home/media/Downloads/ffmpeg"
    import subprocess as sp
    import sys
    width = 360
    height = 240
    command_in = [ FFMPEG_BIN,
               '-i', '/home/media/Videos/mytestvideo/zhou.avi',
               '-f', 'image2pipe',
               '-pix_fmt', 'bgr24',
               '-vcodec', 'rawvideo', '-']
    pipe_in = sp.Popen(command_in, stdout = sp.PIPE, bufsize = 10**8)

    command_out = [ FFMPEG_BIN,
           '-y', # (optional) overwrite output file if it exists
           '-f', 'rawvideo',
           '-vcodec','rawvideo',
           '-s', '360x240', # size of one frame
           '-pix_fmt', 'bgr24',
           '-r', '28', # frames per second
            '-i', '-', # The input comes from a pipe
           '-an', # Tells FFMPEG not to expect any audio
           #'-vcodec', 'mpeg',
           'my_output_videofile.mp4' ]

    pipe_out = sp.Popen( command_out, stdin=sp.PIPE, stderr=sp.PIPE)

    import numpy
    import cv2
    import pylab
    # read width*height*3 bytes (= 1 frame)
    while True:
       raw_image = pipe_in.stdout.read(width*height*3)
       image =  numpy.fromstring(raw_image, dtype='uint8')
       image = image.reshape((height,width,3))
       pipe_in.communicate()
       pipe_out.stdin.write(image.tostring())
       pipe_out.communicate()
       pipe_in.stdout.flush()
       #cv2.imshow('image',image)
       #cv2.waitKey(0)
       # throw away the data in the pipe's buffer.


    '''
    pipe_in.stdin.close()
    pipe_in.stderr.close()
    pipe_in.wait()
    pipe_out.stdout.close()
    pipe_out.stderr.close()
    pipe_out.wait()
    '''
    #pipe_out.stdin.write(image.tostring())

    However, the output video has only one frame (the first frame of the input video).

    Any ideas?

    Thanks!

  • How to record a webcam to a file outside of X11?

    13 December 2017, by Dav Clark

    I’m working with teachers to automatically record their classes, so we can review them and improve the quality of teaching. We have computers running Ubuntu 17.10 with multiple webcams in a couple of classrooms - but I could run other software if it makes this task easier.

    I can successfully record a stream from the webcam to an h264-encoded file using GStreamer. The following should work for most people with GStreamer installed, but I've got fancier pipelines using VAAPI that can simultaneously encode multiple 4K streams on a NUC with room to spare! My point is that GStreamer works great when I'm typing at a terminal in the GUI. The example:

    .\gst-launch-1.0.exe -e autovideosrc ! videoconvert ! \
     openh264enc max-bitrate=256000 ! h264parse ! \
     mp4mux ! filesink location=somefile.mp4

    I imagine I could also do this with ffmpeg, or OpenCV, or maybe even VLC (I can record a webcam via the GUI, so I guess I could use that to generate a command line?).

    But when I tried any of the above, for example, via SSH, I get errors from GStreamer and OpenCV, and blank videos from ffmpeg (I haven’t tried VLC because I don’t currently have access to these machines). I need to automate - but I could potentially leave a user logged in. I just need to have some way to capture webcam to disk with some amount of reasonable compression.

    I naively thought I could throw something like the above into a cron job and I’d be good to go (intending to send a SIGINT to end recording). But anything that can be automatically scheduled somehow would be great.
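    The SIGINT idea is sound: ffmpeg finalizes its output on the first SIGINT (the log below ends with "Exiting normally, received signal 2"), and gst-launch-1.0 with the -e flag used above sends EOS on shutdown so the mp4 gets a proper trailer. One way to script start/stop from a supervisor, sketched with a stand-in command so it runs without a webcam (the real invocation would be the ffmpeg or gst-launch line):

```python
import signal
import subprocess
import sys
import time

def record_for(cmd, seconds):
    """Run a capture command, then stop it cleanly with SIGINT.

    ffmpeg (and gst-launch-1.0 -e) treat SIGINT like ctrl-C at a
    terminal: they flush and finalize the output file before exiting.
    """
    proc = subprocess.Popen(cmd)
    time.sleep(seconds)
    proc.send_signal(signal.SIGINT)
    return proc.wait()

# stand-in child process instead of a real capture command
rc = record_for([sys.executable, "-c", "import time; time.sleep(60)"], 1)
```

    Something like this can itself be launched from cron or a systemd timer; the supervisor owns the child's lifetime, so no interactive terminal is needed.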

    EDIT: Below is an approach I'm trying using ffmpeg. You can see from the output that I can't figure out how to specify pixel_format in a way that ffmpeg pays attention to! First, the command (using mkv because that seems to be a "low-stress" format, but I have also tried mov and mp4):

    ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128 \
     -f v4l2 -framerate 30 -video_size hd720 -pixel_format yuv420p -i /dev/video1 output.mkv
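    A likely explanation for the pixel_format being ignored (an observation, not from the thread): every option before -i is an input option, so -pixel_format yuv420p asks the v4l2 device to capture in that format, which this camera does not offer; the log below shows it delivering yuyv422, and libx264 then picks yuv422p on its own. To control what the encoder produces, -pix_fmt yuv420p has to appear as an output option, after -i. A sketch of the corrected command assembly (device path and capture options taken from the question):

```python
def build_capture_command(device, output):
    """ffmpeg v4l2 capture with the pixel format forced on the output side.

    Everything before -i configures the capture device; -pix_fmt after
    -i inserts a conversion so libx264 encodes yuv420p instead of the
    camera's native yuyv422.
    """
    return [
        "ffmpeg",
        "-f", "v4l2",
        "-framerate", "30",
        "-video_size", "hd720",
        "-i", device,
        "-pix_fmt", "yuv420p",   # output option: converts before encoding
        output,
    ]

cmd = build_capture_command("/dev/video1", "output.mkv")
```

    This also makes the "Use -pix_fmt yuv420p for compatibility" warning in the log go away, since the encoder is no longer left to choose 4:2:2 itself.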

    Like I said, I’m trying to get hardware acceleration, and you can see below that VAAPI is working (but I think just for decoding). You can easily remove the options from the first line, and I get similar results either way. I didn’t include the header with compile options and library versions, as it’s standard Ubuntu 17.10.

    libva info: VA-API version 0.40.0
    libva info: va_getDriverName() returns 0
    libva info: Trying to open /usr/lib/x86_64-linux-gnu/dri/i965_drv_video.so
    libva info: Found init function __vaDriverInit_0_40
    libva info: va_openDriver() returns 0
    Input #0, video4linux2,v4l2, from '/dev/video1':
     Duration: N/A, start: 42437.238243, bitrate: 442368 kb/s
       Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 1280x720, 442368 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
    File 'output.mkv' already exists. Overwrite? [y/N] y
    Stream mapping:
     Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
    Press [q] to stop, [?] for help
    No pixel format specified, yuv422p for H.264 encoding chosen.
    Use -pix_fmt yuv420p for compatibility with outdated media players.
    [libx264 @ 0x55d1a26a71a0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 AVX2 LZCNT BMI2
    [libx264 @ 0x55d1a26a71a0] profile High 4:2:2, level 3.1, 4:2:2 8-bit
    [libx264 @ 0x55d1a26a71a0] 264 - core 148 r2795 aaa9aa8 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, matroska, to 'output.mkv':
     Metadata:
       encoder         : Lavf57.71.100
       Stream #0:0: Video: h264 (libx264) (H264 / 0x34363248), yuv422p, 1280x720, q=-1--1, 30 fps, 1k tbn, 30 tbc
       Metadata:
         encoder         : Lavc57.89.100 libx264
       Side data:
         cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
    Past duration 0.717049 too large
    Past duration 0.879128 too large
    frame=  567 fps= 16 q=27.0 size=    2156kB time=00:00:34.16 bitrate= 516.9kbits/s speed=0.938x

    I exit with ctrl-C, which results in what appears to be an orderly exit:

    [libx264 @ 0x55d1a26a71a0] frame I:11    Avg QP:15.75  size: 18573
    [libx264 @ 0x55d1a26a71a0] frame P:2176  Avg QP:19.91  size:  4435
    [libx264 @ 0x55d1a26a71a0] frame B:173   Avg QP:20.00  size:  3232
    [libx264 @ 0x55d1a26a71a0] consecutive B-frames: 90.1%  0.1%  0.6%  9.2%
    [libx264 @ 0x55d1a26a71a0] mb I  I16..4: 34.0% 56.1%  9.8%
    [libx264 @ 0x55d1a26a71a0] mb P  I16..4:  0.1%  1.2%  0.0%  P16..4: 32.7%  3.1%  6.1%  0.0%  0.0%    skip:56.8%
    [libx264 @ 0x55d1a26a71a0] mb B  I16..4:  0.0%  0.3%  0.0%  B16..8: 31.9%  0.7%  0.1%  direct: 1.4%  skip:65.6%  L0:41.8% L1:57.9% BI: 0.3%
    [libx264 @ 0x55d1a26a71a0] 8x8 transform intra:81.2% inter:92.4%
    [libx264 @ 0x55d1a26a71a0] coded y,uvDC,uvAC intra: 25.4% 20.1% 2.1% inter: 10.2% 7.4% 0.0%
    [libx264 @ 0x55d1a26a71a0] i16 v,h,dc,p: 78% 10%  7%  5%
    [libx264 @ 0x55d1a26a71a0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu:  7%  6% 72%  2%  3%  3%  2%  2%  3%
    [libx264 @ 0x55d1a26a71a0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 34% 21% 25%  4%  5%  3%  4%  1%  3%
    [libx264 @ 0x55d1a26a71a0] i8c dc,h,v,p: 69% 14% 15%  2%
    [libx264 @ 0x55d1a26a71a0] Weighted P-Frames: Y:1.7% UV:0.1%
    [libx264 @ 0x55d1a26a71a0] ref P L0: 49.3%  2.7% 29.6% 18.1%  0.3%
    [libx264 @ 0x55d1a26a71a0] ref B L0: 69.3% 24.1%  6.6%
    [libx264 @ 0x55d1a26a71a0] ref B L1: 86.6% 13.4%
    [libx264 @ 0x55d1a26a71a0] kb/s:529.95
    Exiting normally, received signal 2.