
Other articles (51)

  • Customizing categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of type category, the fields offered by default are: Text
    This form can be edited under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide
    It is also in this configuration area that you can specify the (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player was created specifically for MediaSPIP and can easily be adapted to fit a chosen theme.
    For older browsers, the Flash-based Flowplayer fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (8842)

  • FFmpeg - how to transcode input stream while cutting off first few seconds of video and audio

    25 September 2020, by purplepear24

    I am using ffmpeg to transcode a screen recording (x11) input stream to MP4. I would like to cut off the first 10 seconds of the stream, which is just a blank screen (this is intentional).

    I understand how to trim video with ffmpeg when converting from one MP4 to another, but I can't find any working solution for processing an input stream while accounting for delay and audio/video syncing.

    Here is my current code:

    const { spawn } = require('child_process');
const { S3Uploader } = require('./utils/upload');

const MEETING_URL = process.env.MEETING_URL || 'Not present in environment';
console.log(`[recording process] MEETING_URL: ${MEETING_URL}`);

const args = process.argv.slice(2);
const BUCKET_NAME = args[0];
console.log(`[recording process] BUCKET_NAME: ${BUCKET_NAME}`);
const BROWSER_SCREEN_WIDTH = args[1];
const BROWSER_SCREEN_HEIGHT = args[2];
const MEETING_ID = args[3];
console.log(`[recording process] BROWSER_SCREEN_WIDTH: ${BROWSER_SCREEN_WIDTH}, BROWSER_SCREEN_HEIGHT: ${BROWSER_SCREEN_HEIGHT}, TASK_NUMBER: 43`);

const VIDEO_BITRATE = 3000;
const VIDEO_FRAMERATE = 30;
const VIDEO_GOP = VIDEO_FRAMERATE * 2;
const AUDIO_BITRATE = '160k';
const AUDIO_SAMPLERATE = 44100;
const AUDIO_CHANNELS = 2;
const DISPLAY = process.env.DISPLAY;

const transcodeStreamToOutput = spawn('ffmpeg',[
    '-hide_banner',
    '-loglevel', 'error',
    // disable interaction via stdin
    '-nostdin',
    // screen image size
    // '-s', `${BROWSER_SCREEN_WIDTH}x${BROWSER_SCREEN_HEIGHT}`,
    '-s', '1140x720',
    // video frame rate
    '-r', `${VIDEO_FRAMERATE}`,
    // hides the mouse cursor from the resulting video
    '-draw_mouse', '0',
    // grab the x11 display as video input
    '-f', 'x11grab',
    '-i', ':1.0+372,8',
    // '-i', `${DISPLAY}`,
    // grab pulse as audio input
    '-f', 'pulse', 
        '-ac', '2',
        '-i', 'default',
    // codec video with libx264
    '-c:v', 'libx264',
        '-pix_fmt', 'yuv420p',
        '-profile:v', 'main',
        '-preset', 'veryfast',
        '-x264opts', 'nal-hrd=cbr:no-scenecut',
        '-minrate', `${VIDEO_BITRATE}`,
        '-maxrate', `${VIDEO_BITRATE}`,
        '-g', `${VIDEO_GOP}`,
    // apply a fixed delay to the audio stream in order to synchronize it with the video stream
    '-filter_complex', 'adelay=delays=1000|1000',
    // codec audio with aac
    '-c:a', 'aac',
        '-b:a', `${AUDIO_BITRATE}`,
        '-ac', `${AUDIO_CHANNELS}`,
        '-ar', `${AUDIO_SAMPLERATE}`,
    // fragment the MP4 so the muxer can write to a non-seekable output
    // (avoids: muxer does not support non seekable output)
    '-movflags', 'frag_keyframe+empty_moov+faststart',
    // set output format to mp4 and output file to stdout
    '-f', 'mp4', '-'
    ]
);

transcodeStreamToOutput.stderr.on('data', data => {
    console.log(`[transcodeStreamToOutput process] stderr: ${(new Date()).toISOString()} ffmpeg: ${data}`);
});

const timestamp = new Date();
const year = timestamp.getFullYear();
const month = timestamp.getMonth() + 1;
const day = timestamp.getDate();
const hour = timestamp.getUTCHours();
console.log(MEETING_ID);
const fileName = `${year}/${month}/${day}/${hour}/${MEETING_ID}.mp4`;
new S3Uploader(BUCKET_NAME, fileName).uploadStream(transcodeStreamToOutput.stdout);

// event handler for docker stop; do not exit until the upload completes
process.on('SIGTERM', (signal) => {
    console.log(`[recording process] received ${signal}, stopping ffmpeg`);
    process.kill(transcodeStreamToOutput.pid, 'SIGTERM');
});

// debug use - event handler for ctrl + c
process.on('SIGINT', (signal) => {
    console.log(`[recording process] received ${signal}, stopping ffmpeg`);
    // process.kill expects a pid as its first argument
    process.kill(transcodeStreamToOutput.pid, 'SIGTERM');
});

process.on('exit', function(code) {
    console.log('[recording process] exit code', code);
});

    Any help would be greatly appreciated!
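
    For context beyond the original question: because both inputs here are live (x11grab and pulse), input-side seeking does not apply, but ffmpeg's -ss can also be given as an output option, in which case ffmpeg decodes but discards everything before the given timestamp, dropping video and audio together. A minimal sketch under that assumption, reusing the geometry and codecs from the code above (untested against this exact setup; the adelay filter is omitted for brevity):

ffmpeg -nostdin -s 1140x720 -r 30 -draw_mouse 0 -f x11grab -i :1.0+372,8 \
       -f pulse -ac 2 -i default \
       -ss 10 \
       -c:v libx264 -pix_fmt yuv420p -profile:v main -preset veryfast \
       -c:a aac -b:a 160k -ac 2 -ar 44100 \
       -movflags frag_keyframe+empty_moov+faststart \
       -f mp4 -

    In the spawn args above this would amount to adding '-ss', '10' after the input declarations and before the codec options.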

    


  • Matplotlib: using FFmpeg to save a plot to MP4 does not include the full run

    21 December 2020, by 昌翰余

    I use ffmpeg to save a dynamic graph drawn with matplotlib, but the output file is only 2 seconds long when it should be 30 seconds. The figure animates three curves over 30 seconds of data; the animation shown when running the .py file is normal, but the saved output contains only its first two seconds. Did I miss something?

    Below is my code:

    import matplotlib.pyplot as plt
from matplotlib import animation
from numpy import random 
import pandas as pd
from matplotlib.animation import FFMpegWriter

FFwriter=animation.FFMpegWriter(fps=30, extra_args=['-vcodec', 'libx264'])
data = pd.read_csv('apple1.csv', delimiter = ',', dtype = None)
data = data.values
AccX1=[]
AccY1=[]
AccZ1=[]
AccX2=[]
AccY2=[]
AccZ2=[]

time = []

for i in range(600):
        AccX1.append(data[i][8])
        AccY1.append(data[i][9])
        AccZ1.append(data[i][10])
        AccX2.append(data[i][24])
        AccY2.append(data[i][25])
        AccZ2.append(data[i][26])
        
        time.append(data[i][0])
        
fig = plt.figure()
ax1 = plt.axes(xlim=(0,3000), ylim=(6,-6))
line, = ax1.plot([], [], lw=2)
plt.xlabel('ACC')
plt.ylabel('Time')

plotlays, plotcols = [3], ["r","g","b"]
lines = []
for index in range(3):
    lobj = ax1.plot([],[],lw=2,color=plotcols[index])[0]
    lines.append(lobj)


def init():
    for line in lines:
        line.set_data([],[])
    return lines

x1,y1 = [],[]
x2,y2 = [],[]
x3,y3 = [],[]



i=0

def animate(frame):
    global i
    
    i+=1
    x = i
    y = AccX1[i]

    x1.append(x)
    y1.append(y)

    x = i
    y = AccY1[i]
    x2.append(x)
    y2.append(y)

    x = i
    y = AccZ1[i]
    x3.append(x)
    y3.append(y)
    

    xlist = [x1, x2,x3]
    ylist = [y1, y2,y3]


    for lnum,line in enumerate(lines):
        line.set_data(xlist[lnum], ylist[lnum]) 


    return lines


anim = animation.FuncAnimation(fig, animate,
                    init_func=init, blit=True,interval=10)
anim.save('test.mp4',writer=FFwriter)
plt.show()

    The animation displayed by plt.show() is correct, and I don't think I ever set a storage length anywhere. Do I need to add something?
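
    For context beyond the original post: when FuncAnimation is not given a frames argument, anim.save only writes save_count frames, and save_count defaults to 100; at fps=30 that is roughly 3 seconds, which matches the truncated output described above. A minimal sketch of the change, assuming the 600 samples loaded earlier:

# Tell FuncAnimation how many frames to render when saving; without
# frames (or save_count), anim.save falls back to the default
# save_count of 100 frames, i.e. only ~3 seconds at fps=30.
# len(AccX1) - 1 because animate() increments i before indexing.
anim = animation.FuncAnimation(fig, animate, frames=len(AccX1) - 1,
                               init_func=init, blit=True, interval=10)
anim.save('test.mp4', writer=FFwriter)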

    


  • FFmpeg: sync video from image2pipe with audio from RTMP

    4 June 2021, by AndreaC

    I have an RTMP URL, and I need to draw an overlay using OpenCV while keeping the audio.

    It works well, except for synchronization between audio and video,
    with this command:

    ffmpeg -re -y -f image2pipe -i - \
      -i rtmp://192.168.178.32:1935/opencv-hls/test \
      -map 0:v:0 -vcodec libx264 -g 50 -keyint_min 50 \
      -map 1:a:0 -b:a 128k \
      -f mpegts -codec:v mpeg1video -b:v 700k -s 1024x576 -bf 0 -q 1 \
      http://localhost:3000/mystream

    I open an image2pipe to which I send my overlaid frames, and I give ffmpeg the same source that OpenCV reads as a second input, to acquire the audio and send everything to the mpegts URL.
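
    For readers unfamiliar with this setup, the feeding side looks roughly like the sketch below (hypothetical: the overlay is reduced to a placeholder rectangle, and frames are sent as JPEGs to match the mjpeg input that appears in the log):

import subprocess
import cv2

RTMP_URL = 'rtmp://192.168.178.32:1935/opencv-hls/test'  # same source ffmpeg reads for audio

# ffmpeg gets JPEG frames on stdin (input 0) and audio from the RTMP stream (input 1)
ffmpeg = subprocess.Popen(
    ['ffmpeg', '-re', '-y', '-f', 'image2pipe', '-i', '-',
     '-i', RTMP_URL,
     '-map', '0:v:0', '-map', '1:a:0', '-b:a', '128k',
     '-f', 'mpegts', '-codec:v', 'mpeg1video', '-b:v', '700k',
     '-s', '1024x576', '-bf', '0', '-q', '1',
     'http://localhost:3000/mystream'],
    stdin=subprocess.PIPE)

cap = cv2.VideoCapture(RTMP_URL)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.rectangle(frame, (10, 10), (200, 60), (0, 255, 0), 2)  # placeholder overlay
    ffmpeg.stdin.write(cv2.imencode('.jpg', frame)[1].tobytes())

ffmpeg.stdin.close()
ffmpeg.wait()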

    


    This is the ffmpeg output:

    ffmpeg version 4.1.4-1build2 Copyright (c) 2000-2019 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.2.1-4ubuntu1)
  configuration: --prefix=/usr --extra-version=1build2 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 22.100 / 56. 22.100
  libavcodec     58. 35.100 / 58. 35.100
  libavformat    58. 20.100 / 58. 20.100
  libavdevice    58.  5.100 / 58.  5.100
  libavfilter     7. 40.101 /  7. 40.101
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  3.100 /  5.  3.100
  libswresample   3.  3.100 /  3.  3.100
  libpostproc    55.  3.100 / 55.  3.100
Input #0, image2pipe, from 'pipe:':
  Duration: N/A, bitrate: N/A
    Stream #0:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 1024x576 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 25 tbn, 25 tbc
Input #1, flv, from 'rtmp://192.168.178.32:1935/opencv-hls/test':
  Metadata:
    Server          : NGINX RTMP (github.com/arut/nginx-rtmp-module)
    displayWidth    : 1024
    displayHeight   : 576
    fps             : 0
    profile         : 
    level           : 
  Duration: 00:00:00.00, start: 4.639000, bitrate: N/A
    Stream #1:0: Audio: aac (LC), 44100 Hz, stereo, fltp
    Stream #1:1: Video: h264 (Constrained Baseline), yuv420p(progressive), 1024x576, 29.58 fps, 29.58 tbr, 1k tbn
Stream mapping:
  Stream #0:0 -> #0:0 (mjpeg (native) -> mpeg1video (native))
  Stream #1:0 -> #0:1 (aac (native) -> mp2 (native))
[swscaler @ 0x55f256d63980] deprecated pixel format used, make sure you did set range correctly
Output #0, mpegts, to 'http://localhost:3000/mystream':
  Metadata:
    encoder         : Lavf58.20.100
    Stream #0:0: Video: mpeg1video, yuv420p(progressive), 1024x576 [SAR 1:1 DAR 16:9], q=2-31, 700 kb/s, 25 fps, 90k tbn, 25 tbc
    Metadata:
      encoder         : Lavc58.35.100 mpeg1video
    Side data:
      cpb: bitrate max/min/avg: 0/0/700000 buffer size: 0 vbv_delay: -1
    Stream #0:1: Audio: mp2, 44100 Hz, stereo, s16, 128 kb/s
    Metadata:
      encoder         : Lavc58.35.100 mp2


    


    I always have a delay between audio and video, and it increases over time. Do you have any idea how to sync them?
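
    Beyond the original post, one avenue that is often suggested for drift between independently clocked inputs: timestamp both inputs with the wall clock at read time, and let the aresample filter stretch or squeeze the audio to follow its timestamps. A sketch, not verified against this setup:

ffmpeg -re -use_wallclock_as_timestamps 1 -f image2pipe -i - \
       -use_wallclock_as_timestamps 1 -i rtmp://192.168.178.32:1935/opencv-hls/test \
       -map 0:v:0 -map 1:a:0 \
       -af aresample=async=1 \
       -codec:v mpeg1video -b:v 700k -s 1024x576 -bf 0 -q 1 \
       -b:a 128k -f mpegts http://localhost:3000/mystream

    Here -use_wallclock_as_timestamps 1 is applied per input (before each -i), and aresample=async=1 resamples the audio just enough to stay aligned with those timestamps.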

    


    Thanks,
    Andrea