
Other articles (104)

  • Participate in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    This is done through SPIP's translation interface, where all of MediaSPIP's language modules are available. Simply subscribe to the translators' mailing list to ask for more information.
    MediaSPIP is currently only available in French and (...)

  • Customising the categories

    21 June 2013

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of the "category" type, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of the "media" type, the fields not displayed by default are: Short description
    It is also in this configuration section that you can specify the (...)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
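
    The excerpt above describes the conversion only in general terms. As an illustrative sketch (not MediaSPIP's actual implementation), the same kind of conversion can be scripted around the ffmpeg command line; the codec names assume an ffmpeg build with libx264, libvpx and libtheora enabled and an ffmpeg binary on the PATH.

    import subprocess

    # Illustrative only: transcode one uploaded video into the web formats
    # mentioned above (MP4, WebM and OGV).
    def encode_web_video(src, basename):
        jobs = [
            (["-c:v", "libx264", "-c:a", "aac"], basename + ".mp4"),
            (["-c:v", "libvpx", "-c:a", "libvorbis"], basename + ".webm"),
            (["-c:v", "libtheora", "-c:a", "libvorbis"], basename + ".ogv"),
        ]
        for codec_args, out_file in jobs:
            subprocess.check_call(["ffmpeg", "-y", "-i", src] + codec_args + [out_file])

    encode_web_video("upload.avi", "upload")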

On other sites (10297)

  • How to extract orientation information from videos?

    23 December 2016, by Sid

    After surfing through tons of documentation on the web, it seems that the iPhone always shoots video at a fixed 480x360 resolution and applies a transformation matrix to the video track. (The 480x360 figure may change, but it is always the same for a given device.)

    Here is a way of modifying the ffmpeg source within an iOS project and accessing the matrix: http://www.seqoy.com/correct-orientation-for-iphone-recorded-movies-with-ffmpeg/

    Here is a cleaner way of finding the transformation matrix in iOS-4
    how to detect (iphone sdk) if a video file was recorded in portrait orientation, or landscape

    How can the orientation of the video be extracted using any of the options below?

    - iOS 3.2

    - ffmpeg (through the command line server side)

    - ruby

    Any help will be appreciated.
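
    For the ffmpeg/command-line option, one hedged sketch: recent ffprobe builds report the rotation either as a rotate stream tag or as display-matrix side data in their JSON output, and either can be read with a small wrapper like the one below (assuming ffprobe is installed server-side; the same probe can be shelled out to from Ruby).

    import json
    import subprocess

    def video_rotation(path):
        # Ask ffprobe for per-stream metadata as JSON.
        out = subprocess.check_output([
            "ffprobe", "-v", "quiet", "-print_format", "json",
            "-show_streams", path,
        ])
        for stream in json.loads(out).get("streams", []):
            if stream.get("codec_type") != "video":
                continue
            # Older builds: a "rotate" tag on the video stream.
            tags = stream.get("tags", {})
            if "rotate" in tags:
                return int(tags["rotate"])
            # Newer builds: rotation carried as display-matrix side data.
            for side_data in stream.get("side_data_list", []):
                if "rotation" in side_data:
                    return int(side_data["rotation"])
        return 0

    print(video_rotation("movie.mov"))  # e.g. 90 (or -90, depending on build) for portrait iPhone footage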

  • How to add subtitles using FFmpeg-kit?

    17 November 2024, by Mohammed Bekele

    I'm running a Flutter app with the FFmpeg-kit package to burn subtitles into a video. I have a list of words with their timings and map it to generate SRT and ASS files, but executing the command didn't work.

    


    First, here is how I generated the ASS file.

    


    Future<String> _createAssFile() async {
      String filePath = await getAssOutputFilePath();

      final file = File(filePath);
      final buffer = StringBuffer();

      // Write ASS headers
      buffer.writeln('[Script Info]');
      buffer.writeln('Title: Generated ASS');
      buffer.writeln('ScriptType: v4.00+');
      buffer.writeln('Collisions: Normal');
      buffer.writeln('PlayDepth: 0');
      buffer.writeln('Timer: 100,0000');
      buffer.writeln('[V4+ Styles]');
      buffer.writeln(
          'Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding');
      buffer.writeln(
          'Style: Default,Arial,40,&H00FFFFFF,&H000000FF,&H00000000,&H80000000,1,1,1,1,100,100,0,0,1,1,1,2,10,10,10,1');
      buffer.writeln('[Events]');
      buffer.writeln(
          'Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text');

      // Write events (subtitles)
      for (int i = 0; i < widget.words.length; i++) {
        final word = widget.words[i];
        final startTime = _formatAssTime(word['startTime'].toDouble());
        final endTime = _formatAssTime(word['endTime'].toDouble());
        final text = word['word'];

        buffer.writeln('Dialogue: 0,$startTime,$endTime,Default,,0,0,0,,$text');
      }

      await file.writeAsString(buffer.toString());
      return filePath;
    }

    String _formatAssTime(double seconds) {
      final int hours = seconds ~/ 3600;
      final int minutes = ((seconds % 3600) ~/ 60);
      final int secs = (seconds % 60).toInt();
      final int millis = ((seconds - secs) * 1000).toInt() % 1000;

      return '${hours.toString().padLeft(1, '0')}:${minutes.toString().padLeft(2, '0')}:${secs.toString().padLeft(2, '0')}.${(millis ~/ 10).toString().padLeft(2, '0')}';
    }

    Then I used this command, which is the official way of burning an ASS file into a video.

      String newCmd = "-i $videoPath -vf ass=$assFilePath -c:a copy $_outputPath";

    Executing that command did not work, however, so I changed it to this:

    String newCmd = "-i $videoPath -i $assFilePath $_outputPath";

    That command runs, but the styling is not applied. So I tried yet another command to filter and position the ASS subtitles:

    String newCmd = "-i $videoPath -i $assFilePath -filter_complex \"[0:v][1:s]ass=\\an5:text='%{event.text}',scale=iw*0.8:ih*0.8,setdar=16/9[outv]\" -map [outv] -c:a copy $_outputPath";

    This made it even worse: the app crashed when I ran it. Here is the log output just before the crash:

    F/libc    (19624): Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x19 in tid 21314 (pool-4-thread-7), pid 19624 (ple.caption_app)
    *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
    Build fingerprint: 'samsung/a15nsxx/a15:14/UP1A.231005.007/A155FXXU1AWKA:user/release-keys'
    Revision: '5'
    ABI: 'arm64'
    Processor: '7'
    Timestamp: 2024-07-10 19:27:40.812476860+0300
    Process uptime: 1746s
    Cmdline: com.example.caption_app
    pid: 19624, tid: 21314, name: pool-4-thread-7  >>> com.example.caption_app <<<
    uid: 10377
    tagged_addr_ctrl: 0000000000000001 (PR_TAGGED_ADDR_ENABLE)
    signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x0000000000000019
    Cause: null pointer dereference
        x0  0000000000000001  x1  b400006ec98d8660  x2  0000000000000000  x3  0000000000000002
        x4  0000006f62d1ea12  x5  b400006e04f745ea  x6  352f35372f64352f  x7  7f7f7f7f7f7f7f7f
        x8  632ace36577a905e  x9  632ace36577a905e  x10 0000006e191378c0  x11 0000000000000001
        x12 0000000000000001  x13 0000000000000000  x14 0000000000000004  x15 0000000000000008
        x16 0000006e27ffa358  x17 0000006e27dbfb00  x18 0000006e00d48000  x19 0000006f62d1f0e8
        x20 0000006f62d24000  x21 0000006e27ffc5d0  x22 0000006e27ffc5f0  x23 0000000000000000
        x24 b400006f3f089248  x25 0000006f62d1f220  x26 b400006f3f057b20  x27 0000000000000002
        x28 0000000000000088  x29 0000006f62d1f100
        lr  0000006e27dbfb0c  sp  0000006f62d1f0a0  pc  0000006e27dbfb10  pst 0000000060001000
    1 total frames
    backtrace:
          #00 pc 0000000000121b10  /data/app/~~oWqjrGA2indQhuEw6u_J2A==/com.example.caption_app-u2Ofk54FVtMc5D-i3SLC6g==/base.apk!libavfilter.so (offset 0x10a46000) (avfilter_inout_free+16)
    Lost connection to device.

    I tried the normal command on my local machine on Windows, and this is what I used:

    ffmpeg -i F:\ffmpeg\video.avi -vf subtitles='F\:\\ffmpeg\\caption.srt':force_style='Fontsize=24' F:\ffmpeg\new.mp4

    As you can see, when the subtitles filter is used each backslash in the path is doubled and the colon after the drive letter gets a backslash before it. But I don't know whether the same applies on Android devices. How can I make sense of this?

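    One way to make sense of it, as a hedged sketch: the extra characters come from ffmpeg's filter-argument parsing, where ':' separates options and '\' is the escape character, so a Windows path such as F:\ffmpeg\caption.srt has to be escaped before it is handed to subtitles= or ass=. Typical Android file paths contain neither drive-letter colons nor backslashes, so they usually need no escaping. The transformation is shown below in Python only to illustrate the string replacements; the same logic can be written in Dart.

    def escape_filter_path(path):
        # Escape the characters that are special inside a filter option value:
        # '\' (the escape character itself) and ':' (the option separator),
        # then wrap the value in single quotes for the filtergraph parser.
        escaped = path.replace("\\", "\\\\").replace(":", "\\:")
        return "'" + escaped + "'"

    print(escape_filter_path(r"F:\ffmpeg\caption.srt"))
    # 'F\:\\ffmpeg\\caption.srt'  (matches the working Windows command above)
    print(escape_filter_path("/data/user/0/com.example.app/files/subs.ass"))
    # '/data/user/0/com.example.app/files/subs.ass'  (no special characters to escape)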

  • Basic Video Palette Conversion

    20 August 2011, by Multimedia Mike (General, Python)

    How do you take a 24-bit RGB image and convert it to an 8-bit paletted image for the purpose of compression using a codec that requires 8-bit input images? Seems simple enough, and that's what I'm tackling in this post.

    Ask FFmpeg/Libav To Do It
    Ideally, FFmpeg / Libav should be able to handle this automatically. Indeed, FFmpeg used to be able to, at least at the time I wrote this post about ZMBV and was unhappy with FFmpeg’s default results. Somewhere along the line, FFmpeg and Libav lost the ability to do this. I suspect it got removed during some swscale refactoring.

    Still, there’s no telling if the old system would have computed palettes correctly for QuickTime files.

    Distance Approach
    When I started writing my SMC video encoder, I needed to convert RGB (from PNG files) to PAL8 colorspace. The path of least resistance was to match the pixels in the input image to the default 256-color palette that QuickTime assumes (and is hardcoded into FFmpeg/Libav).

    How to perform the matching? Find the palette entry that is closest to a given input pixel, where "closest" is the minimum distance as computed by the usual distance formula (square root of the sum of the squares of the diffs of all the components).



    That means for each pixel in an image, check the pixel against 256 palette entries (early termination is possible if an acceptable threshold is met). As you might imagine, this can be a bit time-consuming. I wondered about a faster approach...
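
    A minimal sketch of that per-pixel search (essentially what the inner loop of the full script at the end of this post does):

    def nearest_palette_index(r, g, b, palette):
        # Brute-force nearest neighbour in RGB space; comparing squared
        # distances avoids the square root without changing the winner.
        best_index = 0
        min_dsqrd = 0xFFFFFFFF
        for i, (pr, pg, pb) in enumerate(palette):
            dsqrd = (pr - r) ** 2 + (pg - g) ** 2 + (pb - b) ** 2
            if dsqrd < min_dsqrd:
                min_dsqrd = dsqrd
                best_index = i
        return best_index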

    Lookup Table
    I think this is the approach that FFmpeg used to use, but I went and derived it for myself after studying the default QuickTime palette table. There’s a pattern there— all of the RGB entries are comprised of combinations of 6 values — 0x00, 0x33, 0x66, 0x99, 0xCC, and 0xFF. If you mix and match these for red, green, and blue values, you come up with 6 * 6 * 6 = 216 different colors. This happens to be identical to the web-safe color palette.

    The first (0th) entry in the table is (FF, FF, FF), followed by (FF, FF, CC), (FF, FF, 99), and on down to (FF, FF, 00); then the green component gets knocked down a step and the next color is (FF, CC, FF). The first 36 palette entries in the table all have a red component of 0xFF. Thus, if an input RGB pixel has a red color closest to 0xFF, it must map to one of those first 36 entries.

    I created a table which maps component values 0..255 to values from 5..0. Each of the R, G, and B components of an input pixel is used to index into this table, deriving 3 indices ri, gi, and bi. Finally, the index into the palette table is given by:

      index = ri * 36 + gi * 6 + bi
    

    For example, the pixel (0xFE, 0xFE, 0x01) would yield ri, gi, and bi values of 0, 0, and 5. Therefore:

      index = 0 * 36 + 0 * 6 + 5
    

    The palette index is 5, which maps to color (0xFF, 0xFF, 0x00).
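
    Condensed into a self-contained snippet (using the same component thresholds as the full script at the end of the post), the lookup works like this:

    # Map a component value 0..255 to the index (5..0) of the nearest of the
    # six palette steps 0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF.
    pal8_table = [5] * 26 + [4] * 51 + [3] * 51 + [2] * 51 + [1] * 51 + [0] * 26

    def table_palette_index(r, g, b):
        ri, gi, bi = pal8_table[r], pal8_table[g], pal8_table[b]
        return ri * 36 + gi * 6 + bi

    print(table_palette_index(0xFE, 0xFE, 0x01))  # 5 -> (0xFF, 0xFF, 0x00)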

    Validation
    So I was pretty pleased with myself for coming up with that. Now, ideally, swapping out one algorithm for another in my SMC encoder should yield identical results. That wasn’t the case, initially.

    One problem is that the regulation QuickTime palette actually has 40 more entries above and beyond the typical 216-entry color cube (rounding out the grand total of 256 colors). Thus, using the distance approach with the full default table provides for a little more accuracy.

    However, there still seems to be a problem. Let's check our old standby, the Big Buck Bunny logo image:



    Distance approach using the full 256-color QuickTime default palette


    Distance approach using the 216-color palette


    Table lookup approach using the 216-color palette

    I can’t quite account for that big red splotch there. That’s the most notable difference between images 1 and 2 and the only visible difference between images 2 and 3.

    To prove to myself that the distance approach is equivalent to the table approach, I wrote a Python script to iterate through all possible RGB combinations and verify the equivalence. If you're not up on your base 2 math, that's 2^24 or 16,777,216 colors to run through. I used Python's multiprocessing module to great effect and really maximized a Core i7 CPU with 8 hardware threads.

    So I’m confident that the palette conversion techniques are sound. The red spot is probably attributable to a bug in my WIP SMC encoder.

    Source Code
    Update August 23, 2011: Here's the Python code I used for proving equivalence between the 2 approaches. In terms of leveraging multiple CPUs, it's possibly the best program I have written to date.

    PYTHON:

    #!/usr/bin/python

    from multiprocessing import Pool

    palette = []
    pal8_table = []

    def process_r(r):
        counts = []

        for i in xrange(216):
            counts.append(0)

        print "r = %d" % (r)
        for g in xrange(256):
            for b in xrange(256):
                min_dsqrd = 0xFFFFFFFF
                best_index = 0
                for i in xrange(len(palette)):
                    dr = palette[i][0] - r
                    dg = palette[i][1] - g
                    db = palette[i][2] - b
                    dsqrd = dr * dr + dg * dg + db * db
                    if dsqrd < min_dsqrd:
                        min_dsqrd = dsqrd
                        best_index = i
                counts[best_index] += 1

                # check if the distance approach deviates from the table-based approach
                i = best_index
                r = palette[i][0]
                g = palette[i][1]
                b = palette[i][2]
                ri = pal8_table[r]
                gi = pal8_table[g]
                bi = pal8_table[b]
                table_index = ri * 36 + gi * 6 + bi
                if table_index != best_index:
                    print "(0x%02X 0x%02X 0x%02X) : distance index = %d, table index = %d" % (r, g, b, best_index, table_index)

        return counts

    if __name__ == '__main__':
        counts = []
        for i in xrange(216):
            counts.append(0)

        # initialize reference palette
        color_steps = [ 0xFF, 0xCC, 0x99, 0x66, 0x33, 0x00 ]
        for r in color_steps:
            for g in color_steps:
                for b in color_steps:
                    palette.append([r, g, b])

        # initialize palette conversion table
        for i in range(0, 26):
            pal8_table.append(5)
        for i in range(26, 77):
            pal8_table.append(4)
        for i in range(77, 128):
            pal8_table.append(3)
        for i in range(128, 179):
            pal8_table.append(2)
        for i in range(179, 230):
            pal8_table.append(1)
        for i in range(230, 256):
            pal8_table.append(0)

        # create a pool of worker threads and break up the overall job
        pool = Pool()
        it = pool.imap_unordered(process_r, range(256))
        try:
            while 1:
                partial_counts = it.next()
                for i in xrange(216):
                    counts[i] += partial_counts[i]
        except StopIteration:
            pass

        print "index, count, red, green, blue"
        for i in xrange(len(counts)):
            print "%d, %d, %d, %d, %d" % (i, counts[i], palette[i][0], palette[i][1], palette[i][2])