
Media (91)

Other articles (111)

  • Automatic installation script for MediaSPIP

    25 April 2011, by

    To work around installation difficulties caused mainly by server-side software dependencies, an all-in-one bash installation script was created to make this step easier on a server running a compatible Linux distribution.
    To use it you need SSH access to your server and a "root" account, which is what allows the dependencies to be installed. Contact your hosting provider if you do not have these.
    The documentation on using the installation script (...)
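    As a rough sketch of what such a session looks like (the host and the script's file name below are placeholders, not the real download location, which is given in the documentation):

    ssh root@your-server.example      # log in to the server as root over SSH
    bash mediaspip_install.sh         # placeholder name for the all-in-one script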

  • Adding user-specific information and other author-related behaviour changes

    12 April 2011, by

    The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change some user-related behaviour (refer to its documentation for more information).
    It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.

  • Libraries and binaries specific to video and audio processing

    31 January 2010, by

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries:
    FFmpeg: the main encoder; it can transcode almost every type of video and audio file into formats that can be played on the web. See this tutorial for its installation.
    Oggz-tools: inspection tools for Ogg files.
    Mediainfo: retrieves information from most video and audio formats.
    Optional, complementary binaries:
    flvtool2: (...)
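    As a rough illustration only, on a Debian/Ubuntu-style server the corresponding packages could be installed along these lines (the package names are assumptions and can differ between distributions and releases):

    apt-get install ffmpeg oggz-tools mediainfo flvtool2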

On other sites (9460)

  • avcodec/h26[45]_metadata_bsf: Use separate contexts for reading/writing

    6 July 2020, by Andreas Rheinhardt
    avcodec/h26[45]_metadata_bsf: Use separate contexts for reading/writing
    

    Currently, both bsfs used the same CodedBitstreamContext for reading and
    writing; as a consequence, the state of the writer's context at the
    beginning of writing a fragment is exactly the state of the reader after
    having read the fragment; in particular, the writer might not have
    encountered one of its active parameter sets yet.

    This is not nice and may lead to invalid output even when the input
    is completely spec-compliant: Think of an access unit containing
    a primary coded picture referencing a PPS with id id (that is known from
    an earlier access unit/from extradata), then a new version of the PPS
    with id id and then a redundant coded picture that is also referencing
    the PPS with id id. This is spec-compliant, as the standard allows to
    overwrite a PPS with a different PPS in between coded pictures and not
    only at the beginning of an access unit. In this scenario, the reader
    would read the primary coded picture with the old PPS and the redundant
    coded picture with the new PPS (as it should); yet the writer would
    write both with the new PPS as extradata which might lead to errors or
    to invalid data being output without any error (e.g. if the two PPS
    differed in redundant_pic_cnt_present_flag).

    The above scenario does not directly translate to HEVC as long as one
    restricts oneself to input with nuh_layer_id == 0 only (as cbs_h265
    does: it currently strips away any NAL unit with nuh_layer_id > 0 when
    decomposing); if one doesn't, the same issue as above can happen.

    If one also allowed input packets to contain more than one access unit,
    issues like the above can happen even without redundant coded
    pictures/multiple layers.

    Therefore this commit uses separate contexts for reader and writer.

    Reviewed-by: Mark Thompson <sw@jkqxz.net>
    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>

    • [DH] libavcodec/h264_metadata_bsf.c
    • [DH] libavcodec/h265_metadata_bsf.c
  • Splitting m4a into multiple files with ffmpeg

    16 July 2020, by ARuiz

    I have an audio file (.m4a) and want to split it into smaller pieces using ffmpeg. All the answers I have seen follow one of these two approaches:

    1. ffmpeg -i inputFile -map 0 -f segment -segment_time 60 -c copy "$output%03d.m4a"

    or

    2. ffmpeg -i inputFile -acodec copy -ss start_time -to end_time outputFile

    With 1., the first file is fine. From the second file on, I just get a minute of silence in QuickTime, and in VLC the file plays but the timing is odd: for example, second 0 of the second file should correspond to second 60 of the original file, yet in VLC it starts playing at second 60 and runs to second 120.


    With 2., I have to set start and end times for each file; unfortunately, I notice a small jump when playing the pieces one after the other, so it seems as if some milliseconds are lost.


    There are definitely a few old questions about this, but none of them actually helped me.

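    A frequently suggested variant of the first command (a sketch only, not verified against this particular file; -reset_timestamps is a standard option of ffmpeg's segment muxer) restarts each piece's timestamps at zero, which targets exactly this kind of offset:

    ffmpeg -i inputFile -map 0 -f segment -segment_time 60 -reset_timestamps 1 -c copy "$output%03d.m4a"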

  • FFMPEG — Error when trying to concat multiple files with and without audio

    7 August 2020, by Philban

    OK, so thanks to a fellow user I have the following FFmpeg command that concatenates 4 videos. If I use this command with 4 video files that all have audio, everything works! However, if one or more of the videos have no sound, I get a "matches no streams" error.


    Can someone spot what's wrong here, please?


    Video Input 1 - No Audio, so adding the anullsrc
    Video 2 - Has Audio
    Video 3 - No Audio
    Video 4 - Has Audio


    ffmpeg -i noSound1.mp4 -i story1.mp4 -i noSound2.mp4 -i story2.mp4 -t 0.1 -f lavfi -i anullsrc=channel_layout=stereo -filter_complex
    "[0:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v0];
     [1:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v1];
     [2:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v2];
     [3:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v3];
     [0:a]aformat=sample_rates=48000:channel_layouts=stereo[a0];
     [1:a]aformat=sample_rates=48000:channel_layouts=stereo[a1];
     [2:a]aformat=sample_rates=48000:channel_layouts=stereo[a2];
     [3:a]aformat=sample_rates=48000:channel_layouts=stereo[a3];
     [v0][a0][v1][a1][v2][a2][v3][a3]concat=n=4:v=1:a=1[v][a]"
    -map "[v]" -map "[a]" -c:v libx264 -c:a aac -movflags +faststart testOutput.mp4


    Here is the error:


    Stream specifier ':a' in filtergraph description [0:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v0];  [1:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v1];  [2:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v2];  [3:v]scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v3];  [0:a]aformat=sample_rates=48000:channel_layouts=stereo[a0];  [1:a]aformat=sample_rates=48000:channel_layouts=stereo[a1];  [2:a]aformat=sample_rates=48000:channel_layouts=stereo[a2];  [3:a]aformat=sample_rates=48000:channel_layouts=stereo[a3];  [v0][a0][v1][a1][v2][a2][v3][a3]concat=n=4:v=1:a=1[v][a] matches no streams.


    Now a slightly separate question: if I have N input videos and do not know whether they have sound or not, is there a way I can loop over them and do the above without a massive amount of command line? Many thanks!

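    One possible sketch for the N-input case, assuming bash and that ffprobe is available next to ffmpeg (file names and the output name below are placeholders): probe each input for an audio stream and, where none exists, build silence of matching length directly in the filtergraph.

    #!/usr/bin/env bash
    # Sketch only: build the same scale/pad/concat graph for N inputs,
    # inserting generated silence for any input that has no audio stream.
    set -eu

    inputs=(noSound1.mp4 story1.mp4 noSound2.mp4 story2.mp4)   # placeholder list

    args=()       # the -i options
    filter=""     # the filter_complex text
    pairs=""      # interleaved [vN][aN] labels for the concat filter

    for i in "${!inputs[@]}"; do
      f="${inputs[$i]}"
      args+=(-i "$f")
      filter+="[$i:v]scale=1280:720:force_original_aspect_ratio=decrease,"
      filter+="pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,fps=30,format=yuv420p[v$i];"
      # ffprobe prints "audio" for each audio stream; empty output means none.
      have_audio=$(ffprobe -v error -select_streams a -show_entries stream=codec_type -of csv=p=0 "$f")
      if [ -n "$have_audio" ]; then
        # Input has audio: normalise it as in the original command.
        filter+="[$i:a]aformat=sample_rates=48000:channel_layouts=stereo[a$i];"
      else
        # No audio: generate silence and trim it to the clip's duration.
        dur=$(ffprobe -v error -show_entries format=duration -of csv=p=0 "$f")
        filter+="anullsrc=channel_layout=stereo:sample_rate=48000,atrim=duration=${dur}[a$i];"
      fi
      pairs+="[v$i][a$i]"
    done

    filter+="${pairs}concat=n=${#inputs[@]}:v=1:a=1[v][a]"

    ffmpeg "${args[@]}" -filter_complex "$filter" \
      -map "[v]" -map "[a]" -c:v libx264 -c:a aac -movflags +faststart output.mp4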