Other articles (75)

  • Automatic backup of SPIP channels

    1 April 2010, by

    As part of setting up an open platform, it is important for hosts to have reasonably regular backups available in order to cope with any problem that might arise.
    This task relies on two SPIP plugins: Saveauto, which makes regular backups of the database in the form of a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which produces a zip archive of the site’s important data (the documents, the elements (...)

  • Automatic installation script for MediaSPIP

    25 April 2011, by

    To work around installation difficulties caused mainly by server-side software dependencies, an "all-in-one" installation script written in Bash was created to make this step easier on a server running a compatible Linux distribution.
    To use it you need SSH access to your server and a "root" account, which makes it possible to install the dependencies. Contact your hosting provider if you do not have these.
    The documentation for using the installation script (...)

  • Automated installation script of MediaSPIP

    25 April 2011, by

    To overcome the difficulties caused mainly by server-side software dependencies, an "all-in-one" installation script written in Bash was created to facilitate this step on a server running a compatible Linux distribution.
    To use it you must have SSH access to your server and a root account, which allows the dependencies to be installed. Contact your provider if you do not have these.
    The documentation for using this installation script is available here.
    The code of this (...)

On other sites (8223)

  • How do I download the image with metadata in Python? [closed]

    19 October 2024, by Temp Account

    I am downloading some images in Python from the Airtable API and am trying to make a slideshow with them using ffmpeg. I download the images:

    urllib2.urlretrieve(img['url'], "output/images/image_"+str(i)+".jpeg")

    However, when I run the following ffmpeg command

    ffmpeg -framerate 4/60 -i output/images/image_%d.jpeg output/out.mp4

    I get the following error:
    ffmpeg version 6.1.1-3ubuntu5 Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 13 (Ubuntu 13.2.0-23ubuntu3)
  configuration: --prefix=/usr --extra-version=3ubuntu5 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --disable-omx --enable-gnutls --enable-libaom --enable-libass --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libglslang --enable-libgme --enable-libgsm --enable-libharfbuzz --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-openal --enable-opencl --enable-opengl --disable-sndio --enable-libvpl --disable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-ladspa --enable-libbluray --enable-libjack --enable-libpulse --enable-librabbitmq --enable-librist --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libx264 --enable-libzmq --enable-libzvbi --enable-lv2 --enable-sdl2 --enable-libplacebo --enable-librav1e --enable-pocketsphinx --enable-librsvg --enable-libjxl --enable-shared
  libavutil      58. 29.100 / 58. 29.100
  libavcodec     60. 31.102 / 60. 31.102
  libavformat    60. 16.100 / 60. 16.100
  libavdevice    60.  3.100 / 60.  3.100
  libavfilter     9. 12.100 /  9. 12.100
  libswscale      7.  5.100 /  7.  5.100
  libswresample   4. 12.100 /  4. 12.100
  libpostproc    57.  3.100 / 57.  3.100
[mjpeg @ 0x593283e5e3c0] bits 150 is invalid
[mjpeg @ 0x593283e5e3c0] bits 28 is invalid
[image2 @ 0x593283e5d380] Could not find codec parameters for stream 0 (Video: mjpeg (Lossless), none(bt470bg/unknown/unknown), lossless): unspecified size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, image2, from 'output/images/image_%d.jpeg':
  Duration: 00:01:00.00, start: 0.000000, bitrate: N/A
  Stream #0:0: Video: mjpeg (Lossless), none(bt470bg/unknown/unknown), lossless, 0.07 fps, 0.07 tbr, 0.07 tbn
File 'output/out.mp4' already exists. Overwrite? [y/N] y
Stream mapping:
  Stream #0:0 -> #0:0 (mjpeg (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[mjpeg @ 0x593283e5f180] mjpeg: unsupported coding type (cf)
[mjpeg @ 0x593283e5f180] mjpeg: unsupported coding type (c8)
[mjpeg @ 0x593283e5f180] bits 150 is invalid
[vist#0:0/mjpeg @ 0x593283e5f000] Error submitting packet to decoder: Invalid data found when processing input
[mjpeg @ 0x593283e5f180] bits 28 is invalid
[vist#0:0/mjpeg @ 0x593283e5f000] Error submitting packet to decoder: Invalid data found when processing input
[mjpeg @ 0x593283e5f180] mjpeg: unsupported coding type (ce)
[mjpeg @ 0x593283e5f180] mjpeg: unsupported coding type (c6)
[mjpeg @ 0x593283e5f180] unable to decode APP fields: Invalid data found when processing input
    Last message repeated 1 times
[vist#0:0/mjpeg @ 0x593283e5f000] Error submitting packet to decoder: Invalid data found when processing input
[mjpeg @ 0x593283e5f180] unable to decode APP fields: Invalid data found when processing input
[mjpeg @ 0x593283e5f180] invalid id 255
[vist#0:0/mjpeg @ 0x593283e5f000] Error submitting packet to decoder: Invalid data found when processing input
Cannot determine format of input stream 0:0 after EOF
Error marking filters as finished
Error while filtering: Invalid data found when processing input
[vist#0:0/mjpeg @ 0x593283e5f000] Decode error rate 1 exceeds maximum 0.666667
[out#0/mp4 @ 0x593283e603c0] Nothing was written into output file, because at least one of its streams received no packets.
frame=    0 fps=0.0 q=0.0 Lsize=       0kB time=N/A bitrate=N/A speed=N/A    
Conversion failed!

    However, when I download the images in Chrome and then create the slideshow, it works. The images saved from Chrome carry metadata for the file type (JPEG), width, and height, while the images downloaded with Python have no metadata. How do I download that information so that my ffmpeg command will succeed? (One possible approach is sketched below.)

    Thanks!
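
    One possible approach, sketched below under the assumption that the Airtable URLs simply return image bytes: download each file and re-encode it with Pillow, so that every frame handed to ffmpeg is a plain baseline JPEG whose size and pixel format can be read. The `images` list and the output paths mirror the question; Pillow (`PIL`) is an added dependency, and the loop is only an illustration, not a verified fix.

    import io
    import os
    import urllib.request

    from PIL import Image  # Pillow, used here only to normalise the downloaded files

    os.makedirs("output/images", exist_ok=True)

    # `images` stands in for the list of Airtable attachment records from the question.
    for i, img in enumerate(images):
        with urllib.request.urlopen(img['url']) as resp:
            data = resp.read()
        # Re-encode whatever came back as a plain baseline JPEG so that ffmpeg's
        # image2 demuxer can read the dimensions and pixel format.
        frame = Image.open(io.BytesIO(data)).convert("RGB")
        frame.save(f"output/images/image_{i}.jpeg", "JPEG")

    If the re-encoded files still fail, inspecting one of them (for example with ffprobe) would show what the URL actually returned, such as an HTML error page rather than an image.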

  • Precise method of segmenting & transcoding video+audio (via ffmpeg), into an on-demand HLS stream?

    17 November 2019, by Felix

    Recently I’ve been messing around with FFmpeg and streams through Node.js. My ultimate goal is to serve a transcoded video stream - from any input file type - via HTTP, generated in real time, in segments, as it is needed.

    I’m currently attempting to handle this using HLS. I pre-generate a dummy m3u8 manifest using the known duration of the input video. It contains a bunch of URLs that point to individual constant-duration segments. Then, once the client player starts requesting the individual URLs, I use the requested path to determine which time range of video the client needs. Then I transcode the video and stream that segment back to them.

    Now for the problem: This approach mostly works, but has a small audio bug. Currently, with most test input files, my code produces a video that - while playable - seems to have a very small (< .25 second) audio skip at the start of each segment.

    I think this may be an issue with splitting by time in ffmpeg, where the audio stream possibly cannot be sliced accurately at the exact frame the video is. So far, I’ve been unable to figure out a solution to this problem. (A single-pass alternative is sketched after the code below.)

    If anybody has any direction they can steer me - or even a pre-existing library/server that solves this use case - I appreciate the guidance. My knowledge of video encoding is fairly limited.

    I’ll include an example of my relevant current code below, so others can see where I’m stuck. You should be able to run this as a Node.js Express server, then point any HLS player at localhost:8080/master to load the manifest and begin playback. See the transcode.get('/segment/:seg.ts') route at the end for the relevant transcoding bit.

    'use strict';
    const express = require('express');
    const ffmpeg = require('fluent-ffmpeg');
    let PORT = 8080;
    let HOST = 'localhost';
    const transcode = express();


    /*
    * This file demonstrates an Express-based server, which transcodes & streams a video file.
    * All transcoding is handled in memory, in chunks, as needed by the player.
    *
    * It works by generating a fake manifest file for an HLS stream, at the endpoint "/m3u8".
    * This manifest contains links to each "segment" video clip, which browser-side HLS players will load as-needed.
    *
    * The "/segment/:seg.ts" endpoint is the request destination for each clip,
    * and uses FFMpeg to generate each segment on-the-fly, based off which segment is requested.
    */


    const pathToMovie = 'C:\\input-file.mp4';  // The input file to stream as HLS.
    const segmentDur = 5; //  Controls the duration (in seconds) that the file will be chopped into.


    const getMetadata = async(file) => {
       return new Promise( resolve => {
           ffmpeg.ffprobe(file, function(err, metadata) {
               console.log(metadata);
               resolve(metadata);
           });
       });
    };



    // Generate a "master" m3u8 file, which the player should point to:
    transcode.get('/master', async(req, res) => {
       res.set({"Content-Disposition":"attachment; filename=\"m3u8.m3u8\""});
       res.send(`#EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=150000
    /m3u8?num=1
    #EXT-X-STREAM-INF:BANDWIDTH=240000
    /m3u8?num=2`)
    });

    // Generate an m3u8 file to emulate a premade video manifest. Guesses segments based off duration.
    transcode.get('/m3u8', async(req, res) => {
       let met = await getMetadata(pathToMovie);
       let duration = met.format.duration;

       let out = '#EXTM3U\n' +
           '#EXT-X-VERSION:3\n' +
           `#EXT-X-TARGETDURATION:${segmentDur}\n` +
           '#EXT-X-MEDIA-SEQUENCE:0\n' +
           '#EXT-X-PLAYLIST-TYPE:VOD\n';

       let splits = Math.ceil(duration / segmentDur);
       for(let i=0; i < splits; i++){
           out += `#EXTINF:${segmentDur},\n/segment/${i}.ts\n`;
       }
       out+='#EXT-X-ENDLIST\n';

       res.set({"Content-Disposition":"attachment; filename=\"m3u8.m3u8\""});
       res.send(out);
    });

    // Transcode the input video file into segments, using the given segment number as time offset:
    transcode.get('/segment/:seg.ts', async(req, res) => {
       const segment = req.params.seg;
       const time = segment * segmentDur;

       let proc = new ffmpeg({source: pathToMovie})
           .seekInput(time)
           .duration(segmentDur)
           .outputOptions('-preset faster')
           .outputOptions('-g 50')
           .outputOptions('-profile:v main')
           .withAudioCodec('aac')
           .outputOptions('-ar 48000')
           .withAudioBitrate('155k')
           .withVideoBitrate('1000k')
           .outputOptions('-c:v h264')
           .outputOptions(`-output_ts_offset ${time}`)
           .format('mpegts')
           .on('error', function(err, st, ste) {
               console.log('an error happened:', err, st, ste);
           }).on('progress', function(progress) {
               console.log(progress);
           })
           .pipe(res, {end: true});
    });

    transcode.listen(PORT, HOST);
    console.log(`Running on http://${HOST}:${PORT}`);
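
    An aside on the audio skip described above (this is a sketch of an alternative, not a fix for the fully on-demand design): if the whole input can be segmented in one pass, forcing keyframes on the segment boundaries lets ffmpeg's segment muxer cut video and audio at the same instants, so each .ts starts cleanly instead of carrying a per-segment encoder gap. The snippet is wrapped in Python purely for illustration; the ffmpeg flags are the relevant part, and the input/output paths are placeholders.

    import os
    import subprocess

    SEGMENT_DUR = 5  # seconds, matching segmentDur in the server above

    os.makedirs("segments", exist_ok=True)

    # Single-pass segmentation: keyframes are forced at every segment boundary,
    # so the segment muxer can split video and audio at the same timestamps.
    subprocess.run([
        "ffmpeg", "-i", "input.mp4",  # placeholder input path
        "-c:v", "libx264",
        "-force_key_frames", f"expr:gte(t,n_forced*{SEGMENT_DUR})",
        "-c:a", "aac",
        "-f", "segment",
        "-segment_time", str(SEGMENT_DUR),
        "-segment_format", "mpegts",
        "segments/seg_%03d.ts",
    ], check=True)
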
  • using mediamtx to stream video to browser [closed]

    18 October 2024, by Maximilian

    I am using RTSP to feed a video stream to mediamtx.

    I want to display the feed in a browser but always get some errors.

    I use the following to start mediamtx:

    podman run --rm -it -e MTX_PROTOCOLS=tcp -e MTX_WEBRTCADDITIONALHOSTS=192.168.x.x -p 8554:8554 -p 1935:1935 -p 8888:8888 -p 8889:8889 -p 8890:8890/udp -p 8189:8189/udp docker.io/bluenviron/mediamtx

    I use this ffmpeg line to feed mediamtx:

    ffmpeg -i "rtsp://127.0.0.1:10000/test" -f rtsp rtsp://127.0.0.1:8554/feed

    I can view the feed with

    ffplay rtsp://localhost:8554/feed

    ffplay says some things about my stream.

    The mediamtx stream:

    Input #0, rtsp, from 'rtsp://localhost:8554/feed':
      Metadata:
        title           : Session streamed with GStreamer
      Duration: N/A, start: 0.000000, bitrate: N/A
      Stream #0:0: Video: mpeg4 (Simple Profile), yuv420p, 320x240 [SAR 1:1 DAR 4:3], 30 tbr, 90k tbn, 30 tbc

    The source stream:

    Input #0, rtsp, from 'rtsp://localhost:10000/test':
      Metadata:
        title           : Session streamed with GStreamer
        comment         : rtsp-server
      Duration: N/A, start: 0.199989, bitrate: N/A
      Stream #0:0: Video: h264 (Constrained Baseline), yuv420p(tv, smpte170m, progressive), 320x240 [SAR 1:1 DAR 4:3], 30 fps, 30 tbr, 90k tbn, 60 tbc

    I use this HTML code to try to display the video:

    mediamtx gives the following output when the browser tries to connect:

    2024/10/18 07:52:51 INF [RTSP] [session 7fe6a91a] created by 10.0.2.100:57542
    2024/10/18 07:52:51 INF [RTSP] [session 7fe6a91a] is publishing to path 'feed', 1 track (MPEG-4 Video)
    2024/10/18 07:52:52 INF [WebRTC] [session 9d4fb884] created by 10.0.2.100:55070
    2024/10/18 07:52:52 INF [WebRTC] [session 9d4fb884] closed: the stream doesn't contain any supported codec, which are currently AV1, VP9, VP8, H264, Opus, G722, G711, LPCM

    The stream is clearly H.264... why is it complaining? (See the note and sketch below.)
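
    A note on the log above, with a hedged sketch: mediamtx reports the published track as MPEG-4 Video, while the source stream is H.264, so the relay command is apparently re-encoding (no codec is specified, and ffmpeg re-encodes to a default when none is given) rather than passing the H.264 stream through. Copying the stream instead should hand the WebRTC reader one of the codecs it lists as supported. The wrapper below is Python only to stay consistent with the other sketches; the `-c copy` flag is the relevant part.

    import subprocess

    # Relay the source stream to mediamtx without re-encoding, so the published
    # track stays H.264 (a codec the WebRTC reader supports).
    subprocess.run([
        "ffmpeg",
        "-i", "rtsp://127.0.0.1:10000/test",  # source stream from the question
        "-c", "copy",                         # copy the streams as-is
        "-f", "rtsp",
        "rtsp://127.0.0.1:8554/feed",
    ], check=True)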