
Other articles (62)

  • General document management

    13 May 2011, by

    MédiaSPIP never modifies the original document that is uploaded.
    For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while keeping the original available for download in case the original document cannot be read in a web browser; and retrieving the original document's metadata to describe the file textually.
    The tables below explain what MédiaSPIP can do (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. To help us fix it, please provide the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps that led to the problem; and a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

On other sites (6957)

  • Problem using ffmpeg in python to decode video stream to yuv stream and send to a pipe

    30 August 2019, by 瓦达所多阿萨德阿萨德

    This command works nicely and fast in the shell:

    ffmpeg -c:v h264_cuvid -i ./myvideo.mp4 -f null -pix_fmt yuv420p -

    It ran at about 13x speed, or 300 frames/sec.
    Then I tried to send the YUV stream to a pipe and catch it in the main process, using the following Python code:

    import subprocess
    import time

    width, height = 1920, 1080   # frame size reported in the console output below

    cmd = ['ffmpeg', '-c:v', 'h264_cuvid', '-i', './myvideo.mp4',
           '-f', 'image2pipe', '-pix_fmt', 'yuv420p', '-']
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    size_Y = height * width      # luma plane: one byte per pixel
    size_UV = size_Y // 4        # each chroma plane is a quarter of the luma plane
    s = time.time()
    Y = p.stdout.read(size_Y)
    U = p.stdout.read(size_UV)
    V = p.stdout.read(size_UV)
    print('read time: ', time.time() - s)

    However, this took seconds to read just one YUV frame. What went wrong here? I'm not sure what ffmpeg was sending into the pipe: the YUV planar frames themselves, or just pointers to the data planes? (A sketch of reading whole raw frames follows the console output below.)

    The console output:

    [('Number of Frames', 61137), ('FPS', 25.0), ('Frame Shape', (1920, 1080))]
    --Number of Frames: 61137
    --FPS: 25.0
    --Frame Shape: (1920, 1080)
    cmd:  ['ffmpeg', '-c:v', 'h264_cuvid', '-i', './myvideo.mp4', '-f', 'image2pipe', '-pix_fmt', 'yuv420p', '-']
    read time:  5.251002073287964
    1/61137 (0.00%)read time:  2.290238618850708
    2/61137 (0.00%)read time:  1.2984871864318848
    3/61137 (0.00%)read time:  2.2100613117218018
    4/61137 (0.01%)read time:  2.3444178104400635
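
    For reference, a minimal sketch of one way to read whole planar frames from the pipe: it swaps image2pipe for ffmpeg's rawvideo muxer (so the decoded Y, U and V planes are written back-to-back with no container) and takes the 1920x1080 frame size from the output above. This is a sketch under those assumptions, not the original code:

    import subprocess

    width, height = 1920, 1080
    frame_size = width * height * 3 // 2      # yuv420p: full-size Y plane + two quarter-size chroma planes

    cmd = ['ffmpeg', '-c:v', 'h264_cuvid', '-i', './myvideo.mp4',
           '-f', 'rawvideo', '-pix_fmt', 'yuv420p', '-']
    # stderr is discarded; an unread stderr PIPE can fill up and stall ffmpeg
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

    while True:
        frame = p.stdout.read(frame_size)     # one complete frame per read
        if len(frame) < frame_size:           # end of stream
            break
        Y = frame[:width * height]
        U = frame[width * height : width * height * 5 // 4]
        V = frame[width * height * 5 // 4 :]
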
  • Pipe FFMPEG MPEG-DASH livestream to AWS S3

    17 August 2019, by Alexander

    So I'm currently trying to livestream the rendering of a GPU-heavy video (it renders at about 1 fps), encode it into a 30 fps MPEG-DASH livestream, and output this to AWS S3 so that Shaka Player can display the live rendering.

    The first issue is that the livestream keeps looping; it doesn't stop after the rendering loop is done.

    I use a Python script to pipe the output of the rendering to FFmpeg, and pipe the output of FFmpeg to the AWS S3 CLI like this:

    from subprocess import Popen, PIPE

    # stdout must also be a pipe so that p1.stdout can be handed to p2 as its stdin
    p1 = Popen(['ffmpeg', '-y', '-hwaccel', 'cuvid', '-f', 'image2pipe', '-r', '24', '-i', '-',
                '-c:v', 'h264_nvenc', '-b:v', '5M', '-f', 'dash',
                '-movflags', 'frag_keyframe+empty_moov', '-'],
               stdin=PIPE, stdout=PIPE)
               # also tried: shell=True, and '-method', 'PUT', 'https://example.s3.amazonaws.com/test1/test1.mpd'

    p2 = Popen(['aws', 's3', 'cp', '-', 's3://example/test1/test1.mpd'], stdin=p1.stdout)

    # The following commented-out aws s3 sync variant uploads successfully to S3,
    # but it stops after the syncing is done and it is hacky:
    #p1 = Popen(['ffmpeg', '-y', '-vsync', '0', '-hwaccel', 'cuvid', '-f', 'image2pipe', '-r', '24', '-i', '-', '-c:v', 'h264_nvenc', '-b:v', '5M', '-f', 'dash', '-movflags', 'frag_keyframe+empty_moov', 'test2.mpd'], stdin=PIPE)
    #p2 = Popen(['aws', 's3', 'sync', '.', 's3://teststream/test1', '--exclude', '"*"', '--include', '"*.m4s"', '--include', '"*.mpd"'], stdin=PIPE)

    # pseudocode: render each frame and feed it to ffmpeg as a PNG
    for ci, (content, contentName) in enumerate(content_loader):
        im = renderframe(content)
        im.save(p1.stdin, 'PNG')

    p1.stdin.close()
    p1.wait()
    p1.stdout.close()   # let p2 see EOF (p2.stdin is None here, since it was given p1.stdout directly)
    p2.wait()
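
    One workaround, hinted at by the commented-out sync variant above, is to let ffmpeg write the DASH manifest and segments to a local directory and push them to S3 in the background while the rendering loop runs. A rough sketch under those assumptions (the bucket path is taken from the commented-out command; the sync interval is an arbitrary choice):

    import subprocess
    import threading
    import time

    def sync_to_s3(stop_event, interval=5):
        # periodically mirror freshly written .mpd/.m4s files to S3 while ffmpeg keeps writing them
        args = ['aws', 's3', 'sync', '.', 's3://teststream/test1',
                '--exclude', '*', '--include', '*.m4s', '--include', '*.mpd']
        while not stop_event.is_set():
            subprocess.run(args)
            time.sleep(interval)
        subprocess.run(args)   # one final pass after rendering has finished

    stop = threading.Event()
    threading.Thread(target=sync_to_s3, args=(stop,), daemon=True).start()
    # ... render frames and feed them to ffmpeg's stdin as above, then:
    stop.set()
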
  • FFMPEG - Pipe PCM to STDOUT in real-time for Node.js

    20 August 2019, by bloom.510

    I am able to stream realtime PCM data from my system's loopback driver, which I can encode either raw or in WAV format using FFmpeg.

    How can I pipe the PCM to stdout as it's being recorded, in real time?

    I'm batting around in the dark here. So far I've tried logging stdout in Node.js, as well as creating a named pipe and listening for changes to it. Neither of these has returned any output.

    The basic shell command captures the audio:

    ffmpeg -f alsa -i loopout -f s16le -acodec pcm_s16le out.raw

    Using child_process.spawn() in Node.js:

    // spawn ffmpeg with the same arguments as the shell command above (output file: out.raw)
    let ffmpeg = spawn('ffmpeg', [
       '-f', 'alsa', '-ac', '2', '-ar', '44100', '-i',
       'loopout', '-f', 's16le', '-acodec', 'pcm_s16le', 'out.raw',
    ]);

    However:

    // this never logs anything
    ffmpeg.stdout.on('data', (data) => {
       console.log(data.toString());
    });

    // this outputs what you would see in the terminal window
    ffmpeg.stderr.on('data', (data) => {
       console.log(data.toString());
    });

    Is there a way to access a readable stream of this file as it's being created?

    Or perhaps there is a way to stream the PCM to an RTP server and forward it as a buffer over UDP to an Express server?

    Whatever the methodology is, the ultimate goal is to access it as a stream in Node.js and convert it into an ArrayBuffer.
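
    The part that matters here is the ffmpeg invocation rather than the host language: replacing the out.raw file name with - (or pipe:1) makes ffmpeg write the raw s16le PCM to stdout, and the consumer then reads fixed-size chunks from that pipe. A minimal sketch of that idea (a Python sketch, assuming the same alsa loopout source; the same arguments apply to child_process.spawn()):

    import subprocess

    # '-' as the output target makes ffmpeg write the raw s16le PCM to stdout instead of a file
    cmd = ['ffmpeg', '-f', 'alsa', '-ac', '2', '-ar', '44100', '-i', 'loopout',
           '-f', 's16le', '-acodec', 'pcm_s16le', '-']
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)

    CHUNK = 4096                        # small fixed-size reads keep latency low
    total = 0
    while True:
        data = p.stdout.read(CHUNK)     # blocks until ffmpeg has produced enough samples
        if not data:
            break
        total += len(data)              # hand `data` to whatever consumes the PCM instead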