Advanced search

Media (91)

Other articles (89)

  • What is a form mask

    13 June 2013, by

    A form mask is a customization of the form used to publish media, sections, news items, editorials and links to other sites.
    Each object publishing form can therefore be customized.
    To customize the form fields, go to the administration area of your MediaSPIP and select "Configuration des masques de formulaires".
    Then select the form to modify by clicking on its object type. (...)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable MediaSPIP release.
    Its official release date is June 21, 2013, and it is announced here.
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • Adding notes and captions to images

    7 February 2011, by

    To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
    Once the plugin is activated, you can configure it in the configuration area to change the rights to create, modify and delete notes. By default, only site administrators can add notes to images.
    Changes when adding a media item
    When adding a media item of type "image", a new button appears above the preview (...)

On other sites (8756)

  • How to merge segmented WebVTT subtitle files and output a single file?

    15 February, by Dobbelina

    How can I merge a segmented WebVTT subtitle file and output a single file?
    The m3u8 looks like this example:

    



    #EXTM3U
#EXT-X-VERSION:4
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MEDIA-SEQUENCE:1
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-TARGETDURATION:4
#USP-X-TIMESTAMP-MAP:MPEGTS=900000,LOCAL=1970-01-01T00:00:00Z
#EXTINF:4, no desc
0ghzi1b2cz5(11792107_ISMUSP)-textstream_swe=2000-1.webvtt
#EXTINF:4, no desc
0ghzi1b2cz5(11792107_ISMUSP)-textstream_swe=2000-2.webvtt
#EXTINF:4, no desc
0ghzi1b2cz5(11792107_ISMUSP)-textstream_swe=2000-3.webvtt
#EXTINF:4, no desc
0ghzi1b2cz5(11792107_ISMUSP)-textstream_swe=2000-4.webvtt
#EXTINF:4, no desc
0ghzi1b2cz5(11792107_ISMUSP)-textstream_swe=2000-5.webvtt
#EXTINF:4, no desc
0ghzi1b2cz5(11792107_ISMUSP)-textstream_swe=2000-6.webvtt
#EXT-X-ENDLIST


    



    I noticed that each segment is not synchronized/cued against the total playing time, but against the individual TS segments.
    If ffmpeg could be used to do this, what magic input do I need to give it?

    



    A single, correctly cued VTT or SRT file is what I want.

    



    I have a great appetite and don't like chunks, lol!

    



    Thanks for any replies, you lovely people!

    With this I get a merged VTT file, but the cues are all wrong:

    



    ffmpeg -i "https://cmoreseusphlsvod60.akamaized.net/vod/bea44/0ghzi1b2cz5(11792107_ISMUSP).ism/0ghzi1b2cz5(11792107_ISMUSP)-textstream_swe=2000.m3u8" -f segment -segment_time 4 -segment_format webvtt -scodec copy out-%05d.vtt


    



    Each segment is not synchronized/cued against the total playing time, but against the individual TS segments.
    Example output of the above command:

    



    WEBVTT

00:00.000 --> 00:03.040
Du har aktier i ett företag
som saknar framtid.

00:00.000 --> 00:03.280
De vill ha aktierna.
Du känner dem inte, Olga.

00:00.000 --> 00:01.720
De som får Kastrups aktier vinner.


    



    Cues all start like this, which isn't very helpful: 00:00.000

    



    Some segments contain no cues, like segment 15 for example:
https://cmoreseusphlsvod60.akamaized.net/vod/bea44/0ghzi1b2cz5(11792107_ISMUSP).ism/0ghzi1b2cz5(11792107_ISMUSP)-textstream_swe=2000-15.webvtt

    "A WebVTT Segment MAY contain no cues; this indicates that no subtitles are to be displayed during that period."
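
    One approach that follows directly from the observation above is to shift every cue in each segment by the running total of the #EXTINF durations and then concatenate the results into a single file. Here is a minimal Python sketch of that idea, assuming the six .webvtt segments listed in the m3u8 have already been downloaded next to the script and that each one really covers the 4 seconds the playlist declares; the merged.vtt output name is arbitrary:

    import re

    # Segment names and durations as given in the example m3u8 (#EXTINF:4 each);
    # a real script would parse these from the playlist and download the files first.
    segments = ['0ghzi1b2cz5(11792107_ISMUSP)-textstream_swe=2000-%d.webvtt' % i
                for i in range(1, 7)]
    durations = [4.0] * len(segments)

    TS = re.compile(r'(\d+:)?(\d{2}):(\d{2})\.(\d{3})')

    def shift(m, offset):
        # Convert MM:SS.mmm or HH:MM:SS.mmm to seconds, add the segment offset,
        # and format the result back as HH:MM:SS.mmm.
        hours = int(m.group(1)[:-1]) if m.group(1) else 0
        seconds = hours * 3600 + int(m.group(2)) * 60 + int(m.group(3)) + int(m.group(4)) / 1000.0
        ms = int(round((seconds + offset) * 1000))
        return '%02d:%02d:%02d.%03d' % (ms // 3600000, ms // 60000 % 60, ms // 1000 % 60, ms % 1000)

    offset = 0.0
    with open('merged.vtt', 'w', encoding='utf-8') as out:
        out.write('WEBVTT\n\n')
        for seg, dur in zip(segments, durations):
            with open(seg, encoding='utf-8') as f:
                for line in f:
                    if '-->' in line:
                        # Cue timing line: shift both timestamps by the running offset.
                        out.write(TS.sub(lambda m: shift(m, offset), line))
                    elif not line.startswith(('WEBVTT', 'X-TIMESTAMP-MAP')):
                        # Cue text, identifiers and blank lines are copied as-is.
                        out.write(line)
            offset += dur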

  • Can you put the result of a blackdetect filter in a textfile using ffmpeg ?

    18 November 2020, by Gijserman

    I'm testing out the "blackdetect" filter in ffmpeg. I want the times when the video is black to be readable by a script (like ActionScript or JavaScript). I tried:

    



    ffmpeg -i video1.mp4 -vf "blackdetect=d=2:pix_th=0.00" -an -f null -


    



    And I get a nice result in the ffmpeg log:

    



    ffmpeg version N-55644-g68b63a3 Copyright (c) 2000-2013 the FFmpeg developers
  built on Aug 19 2013 20:32:00 with gcc 4.7.3 (GCC)
  configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-av
isynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enab
le-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetyp
e --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --ena
ble-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-l
ibopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libsp
eex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-
amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --
enable-libxvid --enable-zlib
  libavutil      52. 42.100 / 52. 42.100
  libavcodec     55. 28.100 / 55. 28.100
  libavformat    55. 13.103 / 55. 13.103
  libavdevice    55.  3.100 / 55.  3.100
  libavfilter     3. 82.100 /  3. 82.100
  libswscale      2.  5.100 /  2.  5.100
  libswresample   0. 17.103 /  0. 17.103
  libpostproc    52.  3.100 / 52.  3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video1.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    creation_time   : 1970-01-01 00:00:00
    encoder         : Lavf53.13.0
  Duration: 00:02:01.54, start: 0.000000, bitrate: 275 kb/s
    Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 768x432 [
SAR 1:1 DAR 16:9], 211 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc
    Metadata:
      creation_time   : 1970-01-01 00:00:00
      handler_name    : VideoHandler
    Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 59
 kb/s
    Metadata:
      creation_time   : 1970-01-01 00:00:00
      handler_name    : SoundHandler
Output #0, null, to 'pipe:':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf55.13.103
    Stream #0:0(eng): Video: rawvideo (I420 / 0x30323449), yuv420p, 768x432 [SAR
 1:1 DAR 16:9], q=2-31, 200 kb/s, 90k tbn, 25 tbc
    Metadata:
      creation_time   : 1970-01-01 00:00:00
      handler_name    : VideoHandler
Stream mapping:
  Stream #0:0 -> #0:0 (h264 -> rawvideo)
Press [q] to stop, [?] for help
[null @ 00000000003279a0] Encoder did not produce proper pts, making some up.
[blackdetect @ 0000000004d5e800] black_start:0 black_end:17.08 black_duration:17
.08
[blackdetect @ 0000000004d5e800] black_start:62.32 black_end:121.48 black_durati
on:59.16
frame= 3038 fps=2317 q=0.0 Lsize=N/A time=00:02:01.52 bitrate=N/A
video:285kB audio:0kB subtitle:0 global headers:0kB muxing overhead -100.007543%


    



    And I'm particularly interested in this part:

    



    [blackdetect @ 0000000004e2e340] black_start:0 black_end:17.08 black_duration:17.08
[blackdetect @ 0000000004e2e340] black_start:62.32 black_end:121.48 black_duration:59.16


    



    So my questions:

    1. Is there a way to take only the blackdetect filter output and put it in a .txt file?

    2. And if this is possible, is there a way to do this in a statement with multiple video inputs? Like in this example:

    



    ffmpeg -f concat -i mylist.txt -c copy concat.mp4


    



    Where mylist.txt is a list of videos:

    



    file 'video1.mp4'
file 'video2.mp4'
file 'video3.mp4'
file 'video4.mp4'

    Basically, what I want is one or more text files containing information about the black frames in every video in this list, to be used by another program.
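
    Since ffmpeg writes the blackdetect report to stderr, one straightforward way to get such text files is to run the same command once per video, capture stderr, and keep only the [blackdetect ...] lines. The Python sketch below does that for every entry of the mylist.txt format shown above; the per-video output names (video1.mp4.blackdetect.txt and so on) are purely illustrative:

    import subprocess

    # Reuse the concat-style list shown above: lines like  file 'video1.mp4'
    with open('mylist.txt') as f:
        videos = [line.split("'")[1] for line in f if line.strip().startswith('file ')]

    for video in videos:
        # blackdetect logs to stderr; -f null - decodes the video without writing output.
        result = subprocess.run(
            ['ffmpeg', '-i', video, '-vf', 'blackdetect=d=2:pix_th=0.00',
             '-an', '-f', 'null', '-'],
            stdout=subprocess.DEVNULL, stderr=subprocess.PIPE, text=True)

        # Keep only the lines written by the blackdetect filter.
        black = [line for line in result.stderr.splitlines() if '[blackdetect' in line]

        with open(video + '.blackdetect.txt', 'w') as out:
            out.write('\n'.join(black) + '\n')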

    


  • ffmpeg trim mp3 - determine precisely the start and end times of the section to be trimmed

    4 February 2019, by Ahmed Khalil

    I have a long mp3 track of an audio book (more than 9 hours long) that I would like to trim using ffmpeg.

    The sample code below is used to trim an mp3 section by providing the start and end times. However, when I determine the start and end times and then check the output file, the cut is not as precise as I want, sometimes several minutes ahead of or before the desired point.

    import subprocess

    file = r'audio book.mp3'
    track_name = "trimmed section"
    output = r'D:\{0}'.format(track_name)
    start = '01:26:04'
    end = '01:33:17'

    # Stream-copy (-c copy) the section between start and end into a new mp3.
    d = subprocess.getoutput('ffmpeg -i "{0}" -ss {1} -to {2} -c copy "{3}.mp3"'
                             .format(file, start, end, output))

    print(d)

    Is there a way to determine with accuracy the real start and end times of an mp3 audio track, to be given afterwards as inputs to the code... to trim the desired sections all at once, without the need to adjust/fine-tune the start and end times manually?
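
    One detail worth noting: with -c copy, ffmpeg can only cut on packet boundaries, and a long VBR mp3 may also report imprecise timestamps, so stream copy is not the best choice for exact cuts. Below is a sketch of a variant of the script above that re-encodes the extract instead of copying it, so ffmpeg decodes up to the requested cut points; the libmp3lame quality setting is a placeholder, and decoding a 9-hour file this way is noticeably slower:

    import subprocess

    file = r'audio book.mp3'
    track_name = "trimmed section"
    output = r'D:\{0}'.format(track_name)
    start = '01:26:04'
    end = '01:33:17'

    # Passing the arguments as a list sidesteps shell quoting; -ss/-to after -i
    # together with re-encoding make ffmpeg decode up to the cut points instead
    # of copying whole packets.
    cmd = ['ffmpeg', '-i', file,
           '-ss', start, '-to', end,
           '-c:a', 'libmp3lame', '-q:a', '2',   # placeholder quality setting
           '{0}.mp3'.format(output)]

    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stderr)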