Tag: Christian Nold

Other articles (64)

  • The SPIPmotion queue

    28 November 2010

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including: critique of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; and translations of existing documentation into other languages.
    To contribute, register on the project users' mailing (...)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed to retrieve the data needed for search-engine indexing, and documents are then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
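
    As a rough illustration of the kind of conversion involved (a hypothetical command, not MediaSPIP's actual configuration), producing the WebM variant with ffmpeg could look like:

    ffmpeg -i source.mov -c:v libvpx -b:v 1M -c:a libvorbis source.webm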

On other sites (10835)

  • How to save frame from VOB with correct size?

    18 March 2017, by Paulo Barretto

    I am saving single frames from a VOB file with ffmpeg.

    ffprobe shows this for my VOB file:

    Stream #0:1[0x1e0] : Video : mpeg2video (Main), yuv420p(tv), 720x480
    [SAR 8:9 DAR 4:3], Closed Captions, 3750 kb/s, 29.97 fps, 29.97 tbr,
    90k tbn, 59.94 tbc

    My command line is:

    ffmpeg -i File1.vob -ss 10 -q:v 2 -vframes 1 -an -sn frame10s.jpg

    My JPEG files are saved at 720x480 and look horizontally stretched. How can I make them come out at the correct display ratio of 640x480?
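
    A sketch of a likely fix (not part of the original post): the stream carries SAR 8:9, so scaling by the sample aspect ratio in ffmpeg's scale filter yields square pixels, here 720 x 8/9 = 640 wide:

    ffmpeg -i File1.vob -ss 10 -q:v 2 -vframes 1 -an -sn -vf "scale=iw*sar:ih,setsar=1" frame10s.jpg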

  • How to send encoded video (or audio) data from server to client in a way that's decodable by the WebCodecs API, using minimal latency and data overhead

    11 January 2023, by Tiger Yang

    My question (read the entire post for context):

    Given the unique circumstance of only ever decoding data from a specifically-configured encoder, what is the best way to send the encoded bitstream along with the bare minimum of extra bytes required to properly configure the decoder on the client's end (including only things that change per stream, and omitting things that don't, such as resolution)? I'm a sucker for zero compromises, and I think I am willing to design my own minimal container format to accomplish this.

    Context and problem:

    I'm working on a remote desktop implementation that consists of a server that captures and encodes the display and speakers using FFmpeg and forwards the output via pipe to a Go program, which sends it over two unidirectional WebTransport streams to my client, which I plan to decode using the WebCodecs API. According to MDN, the video decoder must first be fed, via .configure(), an object containing the fields described at https://developer.mozilla.org/en-US/docs/Web/API/VideoDecoder/configure before it is able to decode anything.

    The same goes for the audio decoder: https://developer.mozilla.org/en-US/docs/Web/API/AudioDecoder/configure

    What I've tried so far:

    Because this remote desktop will be for my personal use only, it will only ever receive streams from one specific encoder configured in one specific way: encoding video at a specific resolution, framerate, color space, etc. I therefore took my video-capture FFmpeg command...

    videoString := []string{
        "ffmpeg",
        "-init_hw_device", "d3d11va",
        "-filter_complex", "ddagrab=video_size=1920x1080:framerate=60",
        "-vcodec", "hevc_nvenc",
        "-tune", "ll",
        "-preset", "p7",
        "-spatial_aq", "1",
        "-temporal_aq", "1",
        "-forced-idr", "1",
        "-rc", "cbr",
        "-b:v", "500K",
        "-no-scenecut", "1",
        "-g", "216000",
        "-f", "hevc", "-",
    }

    ...and instructed it to write to an mp4 file instead of outputting to the pipe, and then had this WebCodecs demo, https://w3c.github.io/webcodecs/samples/video-decode-display/, demux it using mp4box.js. Knowing that the demo derives a proper .configure() object, I blindly copied that object and had my client configure itself with it every time. Sadly, it didn't work, and I have since noticed that the "description" part of the configure object changes between files despite the encoder and parameters being the same.

    I knew that mp4 files worked via mp4box, but they can't be streamed with low latency over a network; additionally, ffmpeg's -f parameter specifies the muxer to use, and there are so many different types.

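    (An aside, and an illustrative sketch rather than anything from the original post: MP4 can in fact be streamed at low latency when fragmented; the mp4 muxer writes self-contained fragments to a pipe when asked, along the lines of

    ffmpeg ... -movflags frag_keyframe+empty_moov+default_base_moof -f mp4 -

    where "..." stands for the capture and encoder options above.)
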
    At this point, I think I'm completely out of my depth, so:

    Given the unique circumstance of only ever decoding data from a specifically-configured encoder, what is the best way to send the encoded bitstream along with the bare minimum of extra bytes required to properly configure the decoder on the client's end (including only things that change per stream, and omitting things that don't, such as resolution)? I'm a sucker for zero compromises, and I think I am willing to design my own minimal container format to accomplish this. (copied above)

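    Not from the original thread, but one direction the question points at, sketched under assumptions: ffmpeg's -f hevc output is a raw Annex B elementary stream, and, as I read the WebCodecs HEVC codec registration, a VideoDecoder configured with a hev1.* codec string and no description expects Annex B input. The "minimal container" can then reduce to framing: length-prefix each NAL unit so the receiver can slice the byte stream back apart. A Python sketch of that framing (function names are illustrative):

    import struct

    def annexb_nals(buf: bytes):
        """Yield NAL unit payloads from an Annex B buffer, start codes stripped."""
        i = buf.find(b"\x00\x00\x01")
        while i != -1:
            j = buf.find(b"\x00\x00\x01", i + 3)
            end = len(buf) if j == -1 else j
            nal = buf[i + 3:end]
            # A 4-byte start code (00 00 00 01) leaves one trailing 0x00 on the
            # previous NAL; trailing zero bytes are padding and safe to strip.
            yield nal.rstrip(b"\x00") or nal
            i = j

    def length_prefixed(buf: bytes) -> bytes:
        """Repack Annex B NALs as 4-byte big-endian length + payload for the wire."""
        return b"".join(struct.pack(">I", len(n)) + n for n in annexb_nals(buf))

    On the client, the inverse slicing re-inserts \x00\x00\x00\x01 before each NAL and feeds whole access units to decode(). This is only a sketch: grouping NALs into access units, and making the encoder repeat VPS/SPS/PPS on every IDR so a late joiner can start decoding, are left out.
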
  • Read, process and save video and audio with FFmpeg

    3 May 2017, by sysseon

    I want to open a video resource with ffmpeg from Python, read the decoded frames from a pipe, modify them (e.g. stamp a timestamp on them with OpenCV) and write the result to an output video file. I also want to save the audio track unchanged.

    My code (with no audio, and using two processes):

    import subprocess as sp
    import numpy as np
    # import time
    # import cv2

    FFMPEG_BIN = "C:/ffmpeg/bin/ffmpeg.exe"
    INPUT_VID = 'input.avi'
    res = [320, 240]

    command_in = [FFMPEG_BIN,
                 '-y',  # (optional) overwrite output file if it exists
                 '-i', INPUT_VID,
                 '-f', 'image2pipe',  # image2pipe or rawvideo?
                 '-pix_fmt', 'bgr24',
                 '-vcodec', 'rawvideo',
                 '-']

    command_out = [FFMPEG_BIN,
                  '-y',  # (optional) overwrite output file if it exists
                  '-f', 'rawvideo',
                  '-vcodec', 'rawvideo',
                  '-s', '320x240',
                  '-pix_fmt', 'bgr24',
                  '-r', '25',
                  '-i', '-',
                  # '-i', INPUT_VID,    # Audio
                  '-vcodec', 'mpeg4',
                  'output.mp4']

    pipe_in = sp.Popen(command_in, stdout=sp.PIPE, stderr=sp.PIPE)
    pipe_out = sp.Popen(command_out, stdin=sp.PIPE, stderr=sp.PIPE)

    while True:
       # Read 320*240*3 bytes (= 1 frame)
       raw_image = pipe_in.stdout.read(res[0] * res[1] * 3)
       # Transform the byte read into a numpy array
       image = np.fromstring(raw_image, dtype=np.uint8)
       image = image.reshape((res[1], res[0], 3))
       # Draw some text in the image
       # draw_text(image)

       # Show the image with OpenCV (not working, gray image, why?)
       # cv2.imshow("VIDEO", image)

       # Write image to output process
       pipe_out.stdin.write(image.tostring())

    print 'done'
    pipe_in.kill()
    pipe_out.kill()

    1. Could it be done with a single process? (Read the input from a file, put it in the input pipe, get the image, process it, and put it in the output pipe to be saved into a video file.)
    2. How can I save the audio? In this example I could use '-i INPUT_VID' in the second process to get the audio channel, but my source will be RTSP, and I don't want to create a connection for each process. Could I put video+audio in the pipe and then recover and separate them with numpy? How?
    3. I use a loop to process the frames and wait until I get an error. How can I check whether all the frames have already been read? (See the sketch after this list.)
    4. Not important, but if I try to show the images with OpenCV (cv2.imshow(...)), I only see a gray screen. Why?
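
    A sketch touching points 2 and 3 (illustrative only; it reuses the question's file names, and reading the source twice suits a file but not the RTSP case): end of stream shows up as a short read on the pipe, and dropping the unread stderr=sp.PIPE handles avoids a deadlock once ffmpeg's log output fills the pipe buffer:

    import subprocess as sp
    import numpy as np

    FFMPEG_BIN = "C:/ffmpeg/bin/ffmpeg.exe"
    INPUT_VID = "input.avi"
    W, H = 320, 240
    FRAME_BYTES = W * H * 3  # bgr24: 3 bytes per pixel

    command_in = [FFMPEG_BIN, "-i", INPUT_VID,
                  "-f", "rawvideo", "-pix_fmt", "bgr24", "-"]
    command_out = [FFMPEG_BIN, "-y",
                   "-f", "rawvideo", "-s", "320x240", "-pix_fmt", "bgr24",
                   "-r", "25", "-i", "-",         # processed video from the pipe
                   "-i", INPUT_VID,               # second input, used only for its audio
                   "-map", "0:v", "-map", "1:a?", # '?' tolerates a missing audio track
                   "-c:a", "copy", "-c:v", "mpeg4", "output.mp4"]

    pipe_in = sp.Popen(command_in, stdout=sp.PIPE)
    pipe_out = sp.Popen(command_out, stdin=sp.PIPE)

    while True:
        raw = pipe_in.stdout.read(FRAME_BYTES)
        if len(raw) < FRAME_BYTES:  # short read: all frames consumed (question 3)
            break
        # np.frombuffer gives a read-only view; copy() makes the frame writable
        frame = np.frombuffer(raw, dtype=np.uint8).reshape((H, W, 3)).copy()
        # ... draw the timestamp on `frame` with OpenCV here ...
        pipe_out.stdin.write(frame.tobytes())

    pipe_out.stdin.close()
    pipe_out.wait()
    print("done")

    For an RTSP source you would instead pull the audio in the same ffmpeg process as the video grab, or record it to a separate stream, since opening the URL in both processes would mean two connections.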