Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (82)

  • Updating from version 0.1 to 0.2

    24 June 2013, by

    Explanation of the notable changes involved in moving from MediaSPIP version 0.1 to version 0.2. What's new?
    Software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php is no longer installed, as it is no longer maintained (...)

  • Customising by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • User profiles

    12 April 2011, by

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
    Users can edit their profile from their author page; a "Modifier votre profil" ("Edit your profile") link is (...)

On other sites (12132)

  • How to seamlessly concatenate multiple Opus files together without popping sounds?

    16 February, by Gurdie Derilus

    I have a large PCM file that I've split into N chunks (N being the # of threads), and I encode them in parallel into Opus files with FFmpeg.

    


    Note: All PCM files are 16-bit little-endian, 2 channels, 48000 sample rate.

    


    I then concatenate the Opus files using FFmpeg's concat demuxer, but I can hear an audible pop between each segment.
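
    For reference, the concat demuxer step boils down to a list file plus a stream-copy invocation (the file and list names here are illustrative):

    # list.txt enumerates the segments in playback order:
    #   file 'chunk_0.opus'
    #   file 'chunk_1.opus'
    ffmpeg -f concat -safe 0 -i list.txt -c copy combined.opus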

    


    Opening this sample file in Audacity reveals the issue:
    [screenshot: note the pops introduced in the Opus file]

    


    I created a simple, short Golang project on GitHub with a sample PCM file for easy testing. Note: this is not production code, so it obviously doesn't follow best practices.

    


    #1, I suspected the pops might have been introduced while encoding each PCM chunk to Opus in parallel. This, however, wasn't the case.
    [image: concatenated Opus files vs. separate Opus files]

    


    #2, using the concat filter works; however, it re-encodes the files (see the sketch below), which is not doable in my case as it's too slow (these files can and do reach up to an hour). I know Opus files are chainable, so I can't imagine why they don't work flawlessly.
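
    For comparison, a concat filter invocation (which decodes and re-encodes, hence the slowness) looks roughly like this, with illustrative file names:

    ffmpeg -i chunk_0.opus -i chunk_1.opus -filter_complex "[0:a][1:a]concat=n=2:v=0:a=1[a]" -map "[a]" -c:a libopus combined.opus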

    


    #3, I heard that Opus has a 20 ms frame size, so I split the file on multiples of that frame size, but this made no difference.

    


    // Split the PCM file into one chunk per available CPU core.
    chunkSize := largePcmFileStat.Size() / int64(runtime.GOMAXPROCS(0))
    // Round the chunk size up to a whole number of 4-byte frames (16-bit stereo).
    chunkSize = int64(roundUpToNearestMultiple(float64(chunkSize), 4))
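
    For reference: one 20 ms frame of this PCM format is 48000 samples/s × 2 channels × 2 bytes × 0.02 s = 3840 bytes, so a frame-aligned variant of the rounding above could look like the sketch below (the constant and helper are illustrative, not part of the project):

    // One 20 ms frame of 16-bit stereo 48 kHz PCM:
    // 48000 samples/s * 2 channels * 2 bytes * 0.020 s = 3840 bytes.
    const bytesPer20ms = 48000 * 2 * 2 / 50

    // alignTo20ms rounds a byte count down to a whole number of 20 ms frames,
    // so no chunk boundary falls mid-frame.
    func alignTo20ms(n int64) int64 {
        return n - n%bytesPer20ms
    }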


    


    The entire sample looks like this:

    


    package main

import (
    "context"
    "fmt"
    "io"
    "log"
    "os"
)

func main() {
    // Grab large PCM file
    largePcmFile, err := os.Open("files/full_raw.pcm")
    if err != nil {
        log.Fatalln(err)
    }
    defer largePcmFile.Close()

    // Split into 2 chunks
    ByteRate := 2
    SampleRate := 48000
    Channels := 2
    Seconds := 20
    chunkSize := Seconds * Channels * SampleRate * ByteRate

    file1, err := encodePcmToOpus(context.TODO(), io.LimitReader(largePcmFile, int64(chunkSize)))
    if err != nil {
        log.Fatalln(err)
    }

    file2, err := encodePcmToOpus(context.TODO(), io.LimitReader(largePcmFile, int64(chunkSize)))
    if err != nil {
        log.Fatalln(err)
    }

    fmt.Println("Check if these play with no defects:")
    fmt.Println("file1:", file1)
    fmt.Println("file2:", file2)
    fmt.Println()

    concatFile, err := concatOpusFiles(context.TODO(), []string{file1, file2})
    if err != nil {
        log.Fatalln(err)
    }

    fmt.Println("concatted file:", concatFile.Name())
}
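
    For context, encoding one raw chunk (what encodePcmToOpus presumably does under the hood) boils down to an FFmpeg invocation along these lines; the exact flags used by the project may differ:

    ffmpeg -f s16le -ar 48000 -ac 2 -i chunk.pcm -c:a libopus chunk.opus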


    


  • Node 18 or Node 20 breaks ffmpeg in Google Cloud Functions (ffprobe was killed with signal SIGSEGV)

    10 January 2024, by user20206929

    Please see below: the code works on Node.js 16, but not after upgrading to Node 18 or 20.

    


    const ffmpeg = require("fluent-ffmpeg");

// The following runs inside a .https.onRequest Google Cloud function with enough memory

try {
  const duration = new Promise((resolve, reject) => {
    ffmpeg.ffprobe(videoUrl, (err, metadata) => {
      if (err) {
        if (res.headersSent) {
          console.error("Response already sent");
        } else {
          console.log("Metadata:", metadata);
          console.log("err: " + err);
          res.status(400).send("Error getting video metadata");
        }
        // Reject so the awaited promise cannot hang forever on error.
        reject(err);
        return;
      }
      const duration = metadata.format.duration;
      console.log("video duration in seconds: " + duration);
      resolve(duration);
    });
  });
  videoDuration = await duration;
} catch (err) {
  console.log(err);
  throw err;
}


    


    When upgrading to Node 18/20 (no change other than upgrading Node), the error "ffprobe not found" appears.

    


    But setting the path manually using ffmpeg.setFfprobePath(ffprobePath) triggers the error: Error: ffprobe was killed with signal SIGSEGV

    


    So it seems it's a permissions issue.

    


    However, I tried a lot of different solutions, and none of them made this work.
    For instance, I tried manually downloading ffprobe from https://ffbinaries.com/downloads and adding it to the code.

    


    I tried https://www.npmjs.com/package/@ffprobe-installer/ffprobe and other packages like https://www.npmjs.com/package/ffprobe-static
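
    For reference, the wiring for ffprobe-static that I'd expect to work is minimal (a sketch of the attempted setup, not a confirmed fix):

    const ffmpeg = require("fluent-ffmpeg");
    const ffprobeStatic = require("ffprobe-static");

    // Point fluent-ffmpeg at the bundled ffprobe binary instead of a system one.
    ffmpeg.setFfprobePath(ffprobeStatic.path);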

    


    I also tried downloading the ffprobe binary to the Cloud Function's temporary folder and changing that folder's permissions.

    


    All of these produced the same error.

    


    None of what I could think of made any difference.

    


    Please help, because I need to upgrade from Node 16 to 18 or 20 before Google removes Node 16 on January 31, 2024, and for now I don't see a solution.

    


    I also looked for other ways to get the duration from a video file URL, but using ffmpeg seems to be the only one that should work out of the box, as it does on Node 16.

    


    Thank you,

    


    UPDATE - 11/26/2023

    


    The GCP Functions Node.js 16 runtime uses Ubuntu 18.04, which ships with FFmpeg installed.
    The Node.js 18/20 runtimes use Ubuntu 22.04, and Google decided not to include FFmpeg.
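
    A quick way to confirm whether the runtime image ships ffprobe at all is to log its version from inside the function (a diagnostic sketch only):

    const { execSync } = require("child_process");
    // Throws if no ffprobe binary is on the runtime image's PATH.
    console.log(execSync("ffprobe -version").toString().split("\n")[0]);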

    


    https://cloud.google.com/functions/docs/runtime-support#node.js
https://cloud.google.com/functions/docs/reference/system-packages

    


    No workaround or solution found as of now.

    


    UPDATE - 01/10/2024

    


    Google added ffmpeg back to the latest runtime; this now works as before.

    


  • FFmpeg: What re-encoding settings can be used to achieve results similar to Google Drive's video processing?

    4 August 2023, by Mycroft_47

    Context:

    


    I have a large collection of videos recorded by my phone's camera, which is taking up a significant amount of space. Recently, I noticed that when I uploaded a video to Google Drive and then downloaded it again using IDM (by clicking the pop-up that IDM displays when it detects something downloadable; here's what I mean), the downloaded video retained the same visual quality but occupied much less space. Upon further research, I discovered that Google re-encodes uploaded videos using H.264, and I believe I can achieve similar compression using FFmpeg.

    


    Problem:

    


    Despite experimenting with various FFmpeg commands, I haven't been able to replicate Google Drive's compression. Every attempt using the -codec:v libx264 option alone resulted in videos larger than the original files.

    


    While adjusting the -crf parameter to a higher value and opting for a faster -preset option did yield smaller file sizes, it unfortunately came at the cost of a noticeable degradation in visual quality and the appearance of some visible artifacts in the video.

    


    Google Drive's processing, on the other hand, strikes a commendable balance, achieving a satisfactory file size without compromising visual clarity, (I should note that upon zooming in on this video, I observed some minor blurring, but it was acceptable to me).

    


    Note:

    


    I'm aware that using the H.265 encoder instead of H.264 may give better results. However, to ensure fairness and avoid any potential bias, I think the optimal approach is to first find the best command using the H.264 encoder. Once identified, I can then replace -codec:v libx264 with -codec:v libx265. This will ensure that the chosen command is really the best that FFmpeg can achieve, and that it is not solely influenced by the superior performance of H.265 when used from the outset.

    


    Here's the FFmpeg command I am currently using:

    


    ffmpeg -hide_banner -loglevel verbose ^
    -i input.mp4 ^
    -codec:v libx264 ^
    -crf 36 -preset ultrafast ^
    -codec:a libopus -b:a 112k ^
    -movflags use_metadata_tags+faststart -map_metadata 0 ^
    output.mp4


    


    Video file                     Size (bytes)   Bit rate (bps)   Encoder        FFPROBE - JSON
    Original (named 'raw 1.mp4')   31,666,777     10,314,710       !!!            link
    Without crf                    36,251,852     11,805,216       Lavf60.3.100   link
    With crf                       10,179,113     3,314,772        Lavf60.3.100   link
    Gdrive                         6,726,189      2,190,342        Google         link
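
    The size and bit-rate columns can be read back with ffprobe, for example:

    ffprobe -v error -show_entries format=size,bit_rate -of default=noprint_wrappers=1 input.mp4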

    


    


    Those files can be found here.

    


    Update:

    


    I continued my experiments with the video "raw_1.mp4" and found some interesting results that resemble those shown in this blog post (I recommend consulting this answer).

    


    In the following figure, I observed that setting -preset to veryfast provided the most advantageous results, striking the best balance between compression ratio and compression time (note that a negative percentage in the compression variable indicates an increase in file size after processing):
    [figure: compression ratio and encoding time per -preset value]

    


    In this figure, I used the H.264 encoder and compared the compression ratios of the output files for seven different values of the -crf parameter (CRF values used: 25, 27, 29, 31, 33, 35, 37):
    [figure: H.264 compression ratio per -crf value]

    


    For this figure, I switched the encoder to H.265 while keeping the same CRF values as in the previous figure:
    [figure: H.265 compression ratio per -crf value]

    


    Based on these results, -preset veryfast and a -crf value of 31 are my current preferred settings for FFmpeg, until they are proven to be suboptimal choices.
    As a result, the FFmpeg command I'll use is as follows:

    


    ffmpeg -hide_banner -loglevel verbose ^
    -i input.mp4 ^
    -codec:v libx264 ^
    -crf 31 -preset veryfast ^
    -codec:a libopus -b:a 112k ^
    -movflags use_metadata_tags+faststart -map_metadata 0 ^
    output.mp4


    


    Note that these choices are based solely on the compression results obtained so far; they do not take into account the visual quality of the output files.