
Media (91)

Other articles (112)

  • Customize by adding your logo, banner or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Making files available

    14 April 2011

    By default, when first set up, MediaSPIP does not allow visitors to download files, whether they are the originals or the result of their transformation or encoding. It only allows them to be viewed.
    However, it is possible, and easy, to give visitors access to these documents, in various forms.
    All of this is done on the template configuration page. You need to go to the channel's administration area and choose, in the navigation (...)

  • Automatic installation script for MediaSPIP

    25 April 2011

    To work around the installation difficulties caused mainly by server-side software dependencies, an "all-in-one" bash installation script has been created to make this step easier on a server running a compatible Linux distribution.
    To use it, you need SSH access to your server and a "root" account, which will allow the dependencies to be installed. Contact your hosting provider if you do not have these.
    The documentation on how to use the installation script (...)

On other sites (13666)

  • Why does fluent-ffmpeg only work when it throws the error "Output stream closed"?

    29 March 2024, by volume one

    I am using fluent-ffmpeg to process a video file (and then upload it to Amazon S3). The code is very straightforward, but it only works if:

    • the pipe option {end: true} is set in .output()
    • which has a side effect that causes the following console log output

    Processing: 19.261847354642416% done
    Processing: 32.365144874807335% done
    Processing: 48.80978326261429% done
    Processing: 78.35771917058617%
    Processing: 91.49377493455148% done
    Processing: 99.91264359125745% done
    An error occurred: Output stream closed

    Despite that error, the file appears to be generated correctly and it gets uploaded to Amazon S3 fine.

    This is the fluent-ffmpeg code:

import {PassThrough} from 'node:stream';
import FFMpeg from 'fluent-ffmpeg';

let PassThroughStream = new PassThrough();

FFMpeg('/testvideo.mp4')
    .videoCodec('libx264')
    .audioCodec('libmp3lame')
    .size(`640x480`)
    // Stream output requires manually specifying output formats
    .format('mp4')
    .outputOptions('-movflags dash')
    .on('progress', function (progress) {
        console.log('Processing: ' + progress.percent + '% done');
    })
    .on('error', function (err) {
        console.log('An error occurred: ' + err.message);
    })
    .on('end', function () {
        console.log('FFMpeg Processing finished!');
    })
    .output(PassThroughStream, {end: true})
    .run(); // run() returns immediately; ffmpeg keeps writing to PassThroughStream in the background

// Now upload to S3
try {
    await s3Upload({
        AWSS3Client: 'mys3client',
        Bucket: 'publicbucket',
        ACL: "public-read",
        ContentType: 'video/mp4',
        Key: 'whoever/whatever.mp4',
        Body: PassThroughStream
    });
} catch (error) {
    console.log(`s3Upload error`, error);
}

    If I set the pipe output() option to {end: false}, there is no error from fluent-ffmpeg and I get "Processing: 100% done FFMpeg Processing finished!" as the final console log.

    BUT the problem is that the s3Upload() does not do anything. There are no errors. Just no activity.

    I feel very uncomfortable letting fluent-ffmpeg end in an error, even if the code itself does the job intended. It will also cause testing to fail. What could be the issue?

    The generated command line is: ffmpeg -i https:/xxxbucket.s3.amazonaws.com/14555/file-example.mp4 -acodec libmp3lame -vcodec libx264 -filter:v scale=w=trunc(oh*a/2)*2:h=480 -f mp4 -movflags dash pipe:1
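
    One direction to experiment with (a sketch only, not a confirmed fix): run the encode and the upload concurrently and await both, so neither side tears the PassThrough stream down while the other still needs it, and errors from either step surface in one place. The s3Upload() helper and its options are taken from the question itself; wrapping fluent-ffmpeg in a promise is an assumption.

import { PassThrough } from 'node:stream';
import FFMpeg from 'fluent-ffmpeg';

const PassThroughStream = new PassThrough();

// Wrap the fluent-ffmpeg job in a promise so it can be awaited alongside the upload.
const ffmpegJob = new Promise((resolve, reject) => {
    FFMpeg('/testvideo.mp4')
        .videoCodec('libx264')
        .audioCodec('libmp3lame')
        .size('640x480')
        .format('mp4')
        .outputOptions('-movflags dash')
        .on('error', reject)
        .on('end', resolve)
        .output(PassThroughStream, { end: true })
        .run();
});

// Start the upload immediately so the stream always has a consumer, then wait for both.
// s3Upload() is the question's own helper, assumed to resolve once the body stream ends.
const upload = s3Upload({
    AWSS3Client: 'mys3client',
    Bucket: 'publicbucket',
    ACL: 'public-read',
    ContentType: 'video/mp4',
    Key: 'whoever/whatever.mp4',
    Body: PassThroughStream,
});

await Promise.all([ffmpegJob, upload]);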

    


  • Streaming playlist with browser overlay [closed]

    28 June 2024, by Tchoune

    Do you have any idea how I can stream a video playlist to Twitch (with ffmpeg or another library) and overlay a web page on top of it (with Twitch sub alerts, for example)?

    Bear in mind that my system needs to be multi-user: a user can stream to 1 to n different Twitch channels (multi-instance).

    For production, I plan to use a Linux server without a GUI. I've been looking for a solution for 4 months, but I've run out of ideas.

    I've already tried Xvfb to create a virtual desktop and display a Chromium browser, but it's not effective for production.
    I've also tried the whole Puppeteer route, but it's not usable either.

    My backend server runs Node.js with AdonisJS.
    I'm currently using ffmpeg to broadcast a video playlist with m3u8:

// Assumed context (not shown in the post): this is a method on an AdonisJS class where
// `spawn` comes from 'node:child_process', `app` and `encryption` are AdonisJS core helpers,
// and this.timelinePath, this.logo, this.overlay, this.guestFile, this.baseUrl,
// this.streamKey and this.instance are properties of the class.
startStream(): number {
let parameters = [
  '-nostdin',
  '-re',
  '-f',
  'concat',
  '-safe',
  '0',
  '-vsync',
  'cfr',
  '-i',
  `concat:${app.publicPath(this.timelinePath)}`,
]

let filterComplex = ''

if (this.logo) {
  parameters.push('-i', app.publicPath(this.logo))
  filterComplex += '[1:v]scale=200:-1[logo];[0:v][logo]overlay=W-w-5:5[main];'
} else {
  filterComplex += '[0:v]'
}

if (this.overlay) {
  parameters.push('-i', app.publicPath(this.overlay))
  filterComplex += '[2:v]scale=-1:ih[overlay];[main][overlay]overlay=0:H-h[main];'
}

filterComplex += `[main]drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:textfile=${app.publicPath(this.guestFile)}:reload=1:x=(w-text_w)/2:y=h-text_h-10:fontsize=18:fontcolor=white[main]; [main]drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:text='%{localtime\\:%X}':x=10:y=h-text_h-10:fontsize=16:fontcolor=white`

parameters.push(
  '-filter_complex',
  filterComplex,
  '-copyts',
  '-pix_fmt',
  'yuv420p',
  '-s',
  '1920x1080',
  '-c:v',
  'libx264',
  '-profile:v',
  'high',
  '-preset',
  'veryfast',
  '-b:v',
  '6000k',
  '-maxrate',
  '7000k',
  '-minrate',
  '5000k',
  '-bufsize',
  '9000k',
  '-g',
  '120',
  '-r',
  '60',
  '-c:a',
  'aac',
  '-f',
  'flv',
  `${this.baseUrl}/${encryption.decrypt(this.streamKey)}`
)

this.instance = spawn('ffmpeg', parameters, {
  detached: true,
  stdio: ['ignore', 'pipe', 'pipe'],
})

// The snippet is truncated in the original post; returning the child process pid is an assumption.
return this.instance.pid ?? -1
}
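
    Since the post notes that a user can stream to 1 to n different Twitch channels, a minimal multi-instance sketch could simply spawn one ffmpeg process per channel. This is not from the original code: buildParameters() is assumed to be the parameter construction from startStream(), refactored to take a stream key.

import { spawn, type ChildProcess } from 'node:child_process'

// Hypothetical helper: one ffmpeg process per target channel.
// buildParameters() is assumed to wrap the parameter construction shown above.
function startStreams(
  streamKeys: string[],
  buildParameters: (streamKey: string) => string[]
): ChildProcess[] {
  return streamKeys.map((streamKey) =>
    spawn('ffmpeg', buildParameters(streamKey), {
      detached: true,
      stdio: ['ignore', 'pipe', 'pipe'],
    })
  )
}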


    


    I've thought of using WebRTC, but it doesn't seem to meet my needs.

    I know that GStreamer has WPEWebKit (the wpesrc element) to do this, but there's no Node.js wrapper and, above all, it doesn't take playlist input (m3u8 or txt) into account...

    If anyone has any new ideas, I'd be very grateful.
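
    On the browser-overlay side, one direction (sketched under assumptions, since Xvfb/Chromium was already tried above) is to keep Chromium rendering the overlay page on an Xvfb display and pull that display into the existing filter graph through ffmpeg's x11grab input. The display number, size, framerate and the green-screen colorkey are all assumptions about how the overlay page would be set up.

// Hypothetical sketch (not from the original post): capture an Xvfb display where headless
// Chromium renders the overlay page, and key it over the playlist video.
function browserOverlayArgs(display = ':99.0', inputIndex = 2): { inputArgs: string[]; filter: string } {
  return {
    // Extra ffmpeg input: grab the virtual X display at 30 fps.
    inputArgs: ['-f', 'x11grab', '-framerate', '30', '-video_size', '1920x1080', '-i', display],
    // inputIndex must match this input's position among the other -i inputs.
    // Assumes the overlay page uses a solid green background so colorkey can make it transparent.
    filter: `[${inputIndex}:v]colorkey=0x00FF00:0.3:0.1[browser];[main][browser]overlay=0:0[main];`,
  }
}

// Usage inside the parameter building above (sketch):
//   const { inputArgs, filter } = browserOverlayArgs()
//   parameters.push(...inputArgs)
//   filterComplex += filter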

    


  • Transcoding Modern Formats

    17 August 2014

    I’ve noticed that this blog still gets a decent amount of traffic, particularly to some of the older articles about transcoding. Since I’ve been working on a tool in this space recently, I thought I’d write something up in case it helps folks unravel how to think about transcoding these days.

    The tool I’ve been working on is EditReady, a transcoding app for the Mac. But why do you want to transcode in the first place?

    Dailies

    After a day of shooting, there are a lot of people who need to see the footage from the day. Most of these folks aren’t equipped with editing suites or viewing stations - they want to view footage on their desktop or mobile device. That can be a problem if you’re shooting ProRes or similar.

    Converting ProRes, DNxHD or MPEG2 footage with EditReady to H.264 is fast and easy. With bulk metadata editing and custom file naming, the management of all the files from the set becomes simpler and more trackable.

    One common workflow would be to drop all the footage from a given shot into EditReady. Use the "set metadata for all" command to attach a consistent reel name to all of the clips. Do some quick spot-checks on the footage using the built in player to make sure it’s what you expect. Use the filename builder to tag all the footage with the reel name and the file creation date. Then, select the H.264 preset and hit convert. Now anyone who needs the footage can easily take the proxies with them on the go, without needing special codecs or players, and regardless of whether they’re working on a PC, a Mac, or even a mobile device.
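
    For readers who want to script the same kind of proxy step outside EditReady, a rough ffmpeg equivalent driven from Node.js is sketched below. This is not part of the workflow the post describes, and the file names and quality settings are placeholders.

import { spawn } from 'node:child_process';

// Hypothetical ffmpeg equivalent of the "convert camera originals to H.264 proxies" step
// described above. Paths and settings are placeholders, not EditReady's values.
const proxy = spawn('ffmpeg', [
  '-i', 'A001_C002_prores.mov', // camera original (ProRes, DNxHD, MPEG2, ...)
  '-c:v', 'libx264',
  '-preset', 'fast',
  '-crf', '23',
  '-c:a', 'aac',
  '-movflags', '+faststart',    // friendlier for playback on desktops and mobile devices
  'A001_C002_proxy.mp4',        // H.264 proxy for review
]);

proxy.on('close', (code) => console.log(`ffmpeg exited with code ${code}`));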

    If your production is being shot in the Log space, you can use the LUT feature in EditReady to give your viewers a more traditional "video levels" daily. Just load a basic Log to Video Levels LUT for the batch, and your converted files will more closely resemble graded footage.

    Mezzanine Formats

    Even though many modern post production tools can work natively with H.264 from a GoPro or iPhone, there are a variety of downsides to that type of workflow. First and foremost is performance. When you’re working with H.264 in an editor or color correction tool, your computer has to constantly work to decompress the H.264 footage. Those are CPU cycles that aren’t being spent generating effects, responding to user interface clicks, or drawing your previews. Even apps that endeavor to support H.264 natively often get bogged down, or have trouble with all of the "flavors" of H.264 that are in use. For example, mixing and matching H.264 from a GoPro with H.264 from a mobile phone often leads to hiccups or instability.

    By using EditReady to batch transcode all of your footage to a format like ProRes or DNxHD, you get great performance throughout your post production pipeline, and more importantly, you get consistent performance. Since you’ll generally be exporting these formats from other parts of your pipeline as well - getting ProRes effects shots for example - you don’t have to worry about mix-and-match problems cropping up late in the production process either.

    Just like with dailies, the ability to apply bulk or custom metadata to your footage during your initial ingest also makes management easier for the rest of your production. It also makes your final output faster - transcoding from H.264 to another format is generally slower than transcoding from a mezzanine format. Nothing takes the fun out of finishing a project like watching an "exporting" bar endlessly creep along.

    Modernization

    The video industry has gone through a lot of digital formats over the last 20 years. As Mac OS X has been upgraded over the years, it’s gotten harder to play some of those old formats. There’s a lot of irreplaceable footage stored in formats like Sorenson Video, Apple Intermediate Codec, or Apple Animation. It’s important that this footage be moved to a modern format like ProRes or H.264 before it becomes totally unplayable by modern computers. Because EditReady contains a robust, flexible backend with legacy support, you can bring this footage in, select a modern format, and click convert. Back when I started this blog, we were mostly talking about DV and HDV, with a bit of Apple Intermediate Codec mixed in. If you’ve still got footage like that around, it’s time to bring it forward!
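
    As a sketch of the same idea outside EditReady (an assumption, not the workflow the post describes), a legacy QuickTime movie can be moved to a modern mezzanine codec with ffmpeg from Node.js, for example into ProRes.

import { spawn } from 'node:child_process';

// Hypothetical example: convert a legacy QuickTime movie to ProRes 422 HQ with PCM audio.
// The input name is a placeholder; prores_ks profile 3 corresponds to ProRes 422 HQ.
const modernize = spawn('ffmpeg', [
  '-i', 'legacy_clip.mov',      // placeholder legacy file
  '-c:v', 'prores_ks',
  '-profile:v', '3',
  '-pix_fmt', 'yuv422p10le',
  '-c:a', 'pcm_s16le',
  'modernized_prores.mov',
]);

modernize.on('close', (code) => console.log(`ffmpeg exited with code ${code}`));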

    Output

    Finally, the powerful H.264 transcoding pipeline in EditReady means you generate beautiful deliverable H.264 more rapidly than ever. Just drop in your final, edited ProRes, DNxHD, or even uncompressed footage and generate a high quality H.264 for delivery. It’s never been this easy!

    See for yourself

    We released a free trial of EditReady so you can give it a shot yourself. Or drop me a line if you have questions.