Other articles (76)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013 and it is announced here.
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for an installation in farm mode, you will also need to make other modifications (...)

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP means:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it manages a set of dynamic tags for use in the context of the Semantic Web.
    XMP makes it possible to record information about a file as an XML document: title, author, history (...)

  • Use, discuss, criticize

    13 April 2011, by

    Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
    The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
    A discussion list is available for all exchanges between users.

On other sites (11946)

  • ffmpeg / Audacity channel splitting differences

    14 March 2018, by Adrian Chromenko

    So I’m working on a speech to text project using Python and Google Cloud Services (for phone calls). The mp3s I receive have one voice playing in the left speaker, the other voice in the right speaker.

    So during testing, I manually split the original mp3 file into two WAV files (one for each channel, converted to mono). I did this splitting through Audacity. The accuracy was about 80-90%, which was perfect for my purposes.

    However, once I tried to automate the splitting using ffmpeg (more specifically: ffmpeg -i input_filename.mp3 -map_channel 0.0.0 left.wav -map_channel 0.0.1 right.wav), the accuracy dropped drastically.

    I’ve been experimenting for about a week now but I can’t get the accuracy up. For what it’s worth, the audio files sound identical to the human ear. I found that when I increase the volume of the output files, the accuracy gets better, but never as good as when I did the splitting with Audacity.

    I guess what I'm trying to ask is, what does Audacity do differently?

    Here are the sox -n stat results for each file:

    Split with ffmpeg (20-30% accuracy):

    Samples read:           1690560
    Length (seconds):    211.320000
    Scaled by:         2147483647.0
    Maximum amplitude:     0.433350
    Minimum amplitude:    -0.475739
    Midline amplitude:    -0.021194
    Mean    norm:          0.014808
    Mean    amplitude:    -0.000037
    RMS     amplitude:     0.028947
    Maximum delta:         0.333557
    Minimum delta:         0.000000
    Mean    delta:         0.009001
    RMS     delta:         0.017949
    Rough   frequency:          789
    Volume adjustment:        2.102

    Split with Audacity (80-90% accuracy):

    Samples read:           1689984
    Length (seconds):    211.248000
    Scaled by:         2147483647.0
    Maximum amplitude:     0.217194
    Minimum amplitude:    -0.238373
    Midline amplitude:    -0.010590
    Mean    norm:          0.007423
    Mean    amplitude:    -0.000018
    RMS     amplitude:     0.014510
    Maximum delta:         0.167175
    Minimum delta:         0.000000
    Mean    delta:         0.004515
    RMS     delta:         0.008998
    Rough   frequency:          789
    Volume adjustment:        4.195

    Original mp3:

    Samples read:           3379968
    Length (seconds):    211.248000
    Scaled by:         2147483647.0
    Maximum amplitude:     1.000000
    Minimum amplitude:    -1.000000
    Midline amplitude:    -0.000000
    Mean    norm:          0.014124
    Mean    amplitude:    -0.000030
    RMS     amplitude:     0.047924
    Maximum delta:         1.015332
    Minimum delta:         0.000000
    Mean    delta:         0.027046
    RMS     delta:         0.067775
    Rough   frequency:         1800
    Volume adjustment:        1.000

    One thing that stands out to me is that the durations aren't the same, and neither are the amplitudes. Can I tell ffmpeg what the duration should be when it does the splitting? And can I change the amplitudes to match the Audacity file? I'm not sure what to do to reach the 80% accuracy rate, but increasing the volume seems to be the most promising solution so far.

    Any help would be greatly appreciated. I don’t have to use ffmpeg, but it seems like my only option, as Audacity isn’t scriptable.
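
    A possible direction (an editorial sketch, not an answer from the original thread): newer FFmpeg releases deprecate -map_channel in favour of audio filters, and the channelsplit filter yields one mono stream per channel in a single pass. The output names and the gain factor below are placeholders, and whether matching Audacity's output level actually restores the recognition accuracy is an assumption to verify.

    # Split the stereo MP3 into two mono WAVs with the channelsplit filter.
    ffmpeg -i input_filename.mp3 \
        -filter_complex "[0:a]channelsplit=channel_layout=stereo[left][right]" \
        -map "[left]" left.wav \
        -map "[right]" right.wav

    # Optionally rescale the level afterwards; the factor 2.0 is a placeholder,
    # not a value derived from the sox statistics above.
    ffmpeg -i left.wav -filter:a "volume=2.0" left_adjusted.wav

    If the duration needs to be pinned, adding -t 211.248 as an output option would cap it, though the 576-sample length difference between the two files is in the range of ordinary MP3 encoder delay/padding handling rather than missing audio.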

  • FFMpeg\Format\video\X264 not found

    21 September 2022, by Despossivel

    Can anyone tell me why I get

    Error: Class "FFMpeg\Format\video\X264" not found

    even with the dependency and the binary installed.

    Below is my implementation. I have the dependency in vendor and the ffmpeg binary installed in the container.

<?php

namespace App\Services;
 
use App\Http\Traits\UploadFiles;
use App\Models\OtherVideoFormat;
use Carbon\Carbon;
use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;
use ProtoneMedia\LaravelFFMpeg\Support\FFMpeg;

class VideoServiceSecond
{
    use UploadFiles, SetPath; // SetPath is referenced here but its import is not shown in the post

    protected $otherVideoFormat, $data;

    public function __construct(array $data, $method = 'create')
    {
        $this->otherVideoFormat = app(OtherVideoFormat::class);
        $this->data = $data;

        if ($method == 'create')
            $this->convertFormatVideoFirst();
        else
            $this->convertFormatVideoFirstUpdate();
    }

    public function convertFormatVideoFirst()
    {
        FFMpeg::openUrl($this->data['url_signed'])
            ->export()
            ->toDisk('public')
            ->inFormat(new \FFMpeg\Format\video\X264) // this is the call that triggers the "class not found" error
            ->resize(config('dimensions-videos.second-width'), config('dimensions-videos.second-height'))
            ->save('video-resize-' . config('dimensions-videos.second-width') . 'x'
                . config('dimensions-videos.second-height') . config('files-extensions.video'));

        $response = $this->uploadFileConvert(
            Storage::disk('public')
                ->get("video-resize-" . config('dimensions-videos.second-width') . 'x'
                    . config('dimensions-videos.second-height') . config('files-extensions.video')),
            "video-resize-" . config('dimensions-videos.second-width') . 'x'
                . config('dimensions-videos.second-height') . '-' . Str::uuid() . config('files-extensions.video')
        );

        $this->data['url'] = $response;
        $this->data['url_signed'] = $this->getAuthorizationToDownloadFile($this->data['url']);
        $this->data['created_at'] = Carbon::now()->toDateString();
        $this->data['updated_at'] = Carbon::now()->toDateString();
        $this->data['video_id'] = $this->data['video_id'];
        $this->data['format'] = config('dimensions-videos.second-width') . 'x'
            . config('dimensions-videos.second-height');

        FFMpeg::openUrl($this->data['url_signed'])
            ->getFrameFromSeconds(config('time-capture-thumb.seconds'))
            ->export()
            ->toDisk('public')
            ->save("video-thumb-" . config('dimensions-thumb.second-width') . 'x'
                . config('dimensions-thumb.second-height') . config('files-extensions.thumb'));

        $url_file = $this->uploadPublicFileCompress(
            Storage::disk('public')
                ->get("video-thumb-" . config('dimensions-thumb.second-width') . 'x'
                    . config('dimensions-thumb.second-height') . config('files-extensions.thumb')),
            "video-thumb-" . config('dimensions-thumb.second-width') . 'x'
                . config('dimensions-thumb.second-height') . '-' . Str::uuid() . config('files-extensions.thumb')
        );

        $this->removeImagePublicDisk("video-thumb-" . config('dimensions-thumb.second-width') . 'x'
            . config('dimensions-thumb.second-height') . config('files-extensions.thumb'));
        $this->removeImagePublicDisk("video-resize-" . config('dimensions-videos.second-width') . 'x'
            . config('dimensions-videos.second-height') . config('files-extensions.video'));

        $this->data['thumbnail_url'] = $url_file;

        $this->otherVideoFormat->create($this->data);
    }

    // convertFormatVideoFirstUpdate(), called from the constructor, is not included in the post.
}

    I've tried several options and I can't find the cause of the problem.
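
    An editorial note, not from the original post: PHP-FFMpeg ships this format class as FFMpeg\Format\Video\X264 (capital V), and Composer's autoloader maps the namespace to a file path, so on a case-sensitive filesystem (typical for a Linux container) the lowercase video segment is the most likely reason the class cannot be autoloaded. A minimal sketch of the corrected call, with $url standing in for the signed URL used above:

<?php

use ProtoneMedia\LaravelFFMpeg\Support\FFMpeg;

// Same export chain as in the post, but with the namespace case that matches
// PHP-FFMpeg's directory layout (src/FFMpeg/Format/Video/X264.php).
FFMpeg::openUrl($url)
    ->export()
    ->toDisk('public')
    ->inFormat(new \FFMpeg\Format\Video\X264())
    ->save('video-resize.mp4'); // placeholder file name

    If the case is already correct elsewhere, re-running composer dump-autoload inside the container is another cheap thing to check.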

  • nginx push rtmp stream to ffmpeg

    1 January 2021, by vicmortelmans

    On my Raspberry Pi with a camera module, I am trying to set up a web-based streaming platform. I want to preview the stream in my browser and use CGI scripts to start/stop broadcasting to YouTube (among others).

    This is how I did the streaming setup so far (the original post shows a diagram here):

    Nginx sets up an RTMP application called webcam. This is where I'll send the camera and audio stream using ffmpeg. It publishes the stream as HLS for the web preview. It also pushes the stream to another application called source. That's where I want to (occasionally) hook up another ffmpeg process for broadcasting to YouTube (and other) RTMP endpoints.

    I initiate the stream using ffmpeg like this:

    ffmpeg -loglevel debug -f v4l2 -framerate 15 -video_size 1280x720 -input_format h264 -i /dev/video0 -f alsa -i hw:2 -codec:v copy -g 15 -codec:a aac -b:a 128k -ar 44100 -strict experimental -f flv "rtmp://localhost:1935/webcam/hhart"

    So far everything works fine. I can preview the HLS stream using a video.js viewer on my website (also served by nginx).

    Now I want to start another ffmpeg process for broadcasting to my YouTube channel, hooked up to the source application like this:

    ffmpeg -loglevel debug -f flv -listen 1 -i rtmp://localhost:1935/source/hhart -c copy 'rtmp://a.rtmp.youtube.com/live2/<key>'

    (in the final setup, launching and killing this process will be done via CGI scripts)

    This is what ffmpeg returns:

    Opening an input file: rtmp://localhost:1935/source/hhart.
    [flv @ 0x2032480] Opening 'rtmp://localhost:1935/source/hhart' for reading
    [rtmp @ 0x2032a10] No default whitelist set
    [tcp @ 0x20330f0] No default whitelist set

    and then... nothing happens. There's no stream coming in at YouTube Studio, but there are no error messages either.

    Some other tests I did:

    1. From the webcam application, push directly to the YouTube RTMP => that works! (but it's not what I want, because I want the HLS stream to be online always, but the broadcasting only when I'm going live.)

    2. From VLC, display the stream at rtmp://localhost:1935/source/hhart => similar to ffmpeg, there's no error message, the progress bar keeps loading.

    So I have the impression that there is something going on, but there's no actual data transmitted.

    RTMP section in nginx.conf:

    rtmp {
        server {
            listen 1935;
            chunk_size 4000;

            application webcam {
                live on;
                hls on;
                hls_path /Services/Webcam/HLSStream;
                hls_fragment 3;
                hls_playlist_length 60;
                #deny play all;
                push rtmp://localhost:1935/source/;
                #push rtmp://a.rtmp.youtube.com/live2/<key>;
            }

            application source {
                live on;
                record off;
            }
        }
    }

    Of course, I may be totally on the wrong track, so any suggestions on how I can realize my requirements in a better way are welcome!
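
    An editorial sketch, not an answer from the original thread: nginx already owns port 1935, and its push directive makes nginx act as an RTMP client publishing into the source application, so a relaying ffmpeg probably should not open its own listener with -listen 1 on the same port. One thing worth trying is letting ffmpeg simply pull the stream from the source application as an ordinary RTMP client and relay it unchanged; <key> stays a placeholder, as in the post.

    # Pull the republished stream from nginx (no -listen) and relay it to YouTube.
    ffmpeg -loglevel debug -i rtmp://localhost:1935/source/hhart \
        -c copy -f flv 'rtmp://a.rtmp.youtube.com/live2/<key>'

    The difference is only who plays the server role: the HLS preview keeps running in the webcam application either way, and this relay process can be started and killed from the CGI scripts exactly when going live.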