Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (52)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

On other sites (12301)

  • Bash script for splitting video using ffmpeg outputs wrong video lengths

    19 June 2018, by jonny

    I have a video that's x seconds long. I want to split that video up into equal segments, where each segment is no longer than a minute. To do that, I cobbled together a fairly simple bash script which uses ffprobe to get the duration of the video, work out how long each segment should be, and then iteratively split the video up using ffmpeg:

    INPUT_FILE=$1
    INPUT_DURATION="$(./bin/ffprobe.exe -i "$INPUT_FILE" -show_entries format=duration -v quiet -of csv="p=0")"

    NUM_SPLITS="$(perl -w -e "use POSIX; print ceil($INPUT_DURATION/60), qq{\n}")"

    printf "\nVideo duration: $INPUT_DURATION; "
    printf "Number of videos to output: $NUM_SPLITS; "
    printf "Approximate length of each video: $(echo "$INPUT_DURATION" "$NUM_SPLITS" | awk '{print ($1 / $2)}')\n\n"

    for i in `seq 1 "$NUM_SPLITS"`; do
       START="$(echo "$INPUT_DURATION" "$NUM_SPLITS" "$i" | awk '{print (($1 / $2) * ($3 - 1))}')"
       END="$(echo "$INPUT_DURATION" "$NUM_SPLITS" "$i" | awk '{print (($1 / $2) * $3)}')"

       echo ./bin/ffmpeg.exe -v quiet -y -i "$INPUT_FILE" \
       -vcodec copy -acodec copy -ss "$START" -t "$END" -sn test_${i}.mp4

       ./bin/ffmpeg.exe -v quiet -y -i "$INPUT_FILE" \
       -vcodec copy -acodec copy -ss "$START" -t "$END" -sn test_${i}.mp4
    done

    printf "\ndone\n"

    If I run that script on the 30 MB / 02:50 Big Buck Bunny sample from here, the program's output suggests the videos should all be of equal length:

    λ bash split.bash .\media\SampleVideo_1280x720_30mb.mp4

    Video duration: 170.859000; Number of videos to output: 3; Approximate length of each video: 56.953

    ./bin/ffmpeg.exe -v quiet -y -i .\media\SampleVideo_1280x720_30mb.mp4 -vcodec copy -acodec copy -ss 0 -t 56.953 -sn test_1.mp4
    ./bin/ffmpeg.exe -v quiet -y -i .\media\SampleVideo_1280x720_30mb.mp4 -vcodec copy -acodec copy -ss 56.953 -t 113.906 -sn test_2.mp4
    ./bin/ffmpeg.exe -v quiet -y -i .\media\SampleVideo_1280x720_30mb.mp4 -vcodec copy -acodec copy -ss 113.906 -t 170.859 -sn test_3.mp4

    done

    The span between the -ss and -t values is the same for each ffmpeg command, so the durations should match. But the durations I actually get are closer to:

    test_1.mp4 = 00:56
    test_2.mp4 = 01:53
    test_3.mp4 = 00:56

    The contents of the partial videos also overlap. What am I missing here?
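
    The numbers above are consistent with -t being interpreted as a duration rather than an end timestamp: ffmpeg's -t takes a length, while -to takes an absolute stop time, which is why test_2.mp4 comes out roughly 113 seconds (01:53) long. A minimal, untested sketch of the loop using a per-segment length instead of an end time (variable names reused from the script above):

    # every segment has the same length, so compute it once
    SEGMENT_LENGTH="$(echo "$INPUT_DURATION" "$NUM_SPLITS" | awk '{print ($1 / $2)}')"

    for i in `seq 1 "$NUM_SPLITS"`; do
       START="$(echo "$SEGMENT_LENGTH" "$i" | awk '{print ($1 * ($2 - 1))}')"

       # -t expects a duration; passing the end time to -to instead would also work
       ./bin/ffmpeg.exe -v quiet -y -i "$INPUT_FILE" \
       -vcodec copy -acodec copy -ss "$START" -t "$SEGMENT_LENGTH" -sn test_${i}.mp4
    done

    Note that with -vcodec copy / -acodec copy the cuts snap to keyframes, so the resulting segment lengths will still be approximate.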

  • ffmpeg converted video from s3 bucket downloads before playing the video

    12 October 2020, by Rutu

    I am making a video streaming application with React and Node.js.
    I am converting the video into different resolutions with ffmpeg and storing the output directly in an S3 bucket by piping.

    I am streaming the uploaded video from the S3 bucket directly through CloudFront into an HTML5 video tag using video.js.

    When I try to play the original video through CloudFront in the player, it works fine. But as soon as I play the ffmpeg-converted video through CloudFront, I run into the following issue:

    The video loads for a long time (it downloads in the browser) before it starts playing in the player.

    Below is my ffmpeg command:

    await loadProcess([
        'i', <s3 url of original video>,
        '-movflags', 'frag_keyframe+empty_moov',
        '-vf', 'scale=-2:360',
        '-c:v', 'h264',
        '-profile:v', 'baseline',
        '-r', 30,
        '-g', 60,
        '-b:v', '1M',
        '-f', 'mp4',
        '-'
    ], outPath, 'video/mp4')

    This is my loadProcess function; I am using the AWS CLI to upload the converted video directly to the S3 bucket:

    export function loadProcess(ffmpegOptions, outPath, contentType) {
        return new Promise((resolve, reject) => {
            const videoPath = outPath.replace(/\\/g, "/");
            let ffmpeg = spawn(conf.ffmpeg, ffmpegOptions);
            let awsPipe = spawn('aws', ['s3', 'cp', '--content-type', `${contentType}`, '-', `s3://${process.env.AWS_S3_BUCKET}/${process.env.AWS_S3_VIDEOS_FOLDER}${videoPath}`])
            ffmpeg.stdout.pipe(awsPipe.stdin)

            // ffmpeg write stream flow
            let ffmpegData = ''
            ffmpeg.stderr.on('data', (_data) => {
                ffmpegData += _data.toString();
            })
            ffmpeg.on('close', (code) => {
                if (code === 0) {
                    resolve(outPath)
                } else {
                    let _dataSplit = ffmpegData.split('\n');
                    _dataSplit.pop();
                    console.log({action: 'close', message: _dataSplit.pop(), more: [conf.ffmpeg].concat(ffmpegOptions).join(' ') + '\n' + ffmpegData, code})
                }
            });
            ffmpeg.on('error', (err) => {
                reject({action: ' ffmpeg error', message: err});
            });

            // aws s3 cli read stream pipe flow
            let awsPipeData = ''
            awsPipe.stderr.on('data', _data => {
                awsPipeData += _data.toString()
            });
            awsPipe.on('close', (code) => {
                if (code === 0) {
                    resolve(outPath)
                } else {
                    console.log({action: 'close', message: awsPipeData})
                }
            });
            awsPipe.on('error', (err) => {
                reject({action: 'awsPipe error', message: err});
            })
        })
    }

    I have tried using the +faststart option in ffmpeg but I get a "command failed" error.
    Can someone please help me with this?

    Thanks in advance
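
    A likely reason the +faststart attempt fails is that -movflags +faststart makes ffmpeg rewrite the finished file to move the moov atom to the front, which requires a seekable output; it cannot be combined with piping the result to stdout. A minimal sketch of one workaround, assuming a local temporary file is acceptable before uploading (the file names and bucket path are placeholders, not taken from the question):

    # encode to a local temp file so ffmpeg can relocate the moov atom,
    # then upload the finished file
    ffmpeg -i input.mp4 -vf scale=-2:360 -c:v h264 -profile:v baseline \
      -r 30 -g 60 -b:v 1M -movflags +faststart out_360p.mp4

    aws s3 cp out_360p.mp4 s3://my-bucket/videos/out_360p.mp4 --content-type video/mp4

    If keeping the piped upload is a requirement, the existing frag_keyframe+empty_moov flags already produce a fragmented MP4, so the long initial download may be worth investigating on the player or CloudFront side instead.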

  • How to create m3u8 playlist from mp4 video url ( stored in amazon S3 ) and store the video chunks ( .ts files) and .m3u8 file back to another S3 ?

    19 May 2019, by dexter2019

    I am building an application where users can upload videos and others can watch them later. I am aiming for HLS streaming on the client side, which means the video needs to be packaged as a .m3u8 playlist. I am using the node fluent-ffmpeg module to do the processing; however, I have a major doubt: how do I make sure that all the .ts files (chunks) are stored back in the S3 bucket along with the .m3u8 file after ffmpeg has processed the mp4 file?

    The ffmpeg command only takes the location of the .m3u8 file, so how do I handle it when I want both the input and the output location to be S3?

    Any help will be greatly appreciated.

    I am following the answer from this question, Ffmpeg creating m3u8 from mp4, video file size, which works absolutely fine on my local machine. How do I achieve the same for S3?
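
    One common pattern, sketched here on the assumption that ffmpeg can read the source over HTTPS (a presigned URL would be needed for a private bucket) and that the AWS CLI is configured: write the playlist and segments to a local temporary directory, then sync that whole directory back to S3 so the .m3u8 file and every .ts chunk end up together. The URLs, directory and bucket paths below are placeholders:

    # segment the mp4 into a .m3u8 playlist plus .ts chunks in hls_out/
    mkdir -p hls_out
    ffmpeg -i "https://source-bucket.s3.amazonaws.com/video1.mp4" \
      -codec copy -start_number 0 -hls_time 10 -hls_list_size 0 \
      -f hls hls_out/index.m3u8

    # upload the playlist and all of the segments it references
    aws s3 sync hls_out/ s3://destination-bucket/hls/video1/

    The same idea applies with fluent-ffmpeg: have it write into a local temporary directory and run the sync step once the conversion finishes.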