Other articles (64)

  • User profiles

    12 April 2011, by

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only if the visitor is logged in on the site.
    The user can access profile editing from their author page; a "Modifier votre profil" link in the navigation is (...)

  • Configuring language support

    15 November 2010, by

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administrer" section of the site.
    From there, in the navigation menu, you can access a "Gestion des langues" section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language. In that case, it becomes greyed out in the configuration and (...)

  • XMP PHP

    13 May 2011, by

    According to Wikipedia, XMP means:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography, and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it manages a set of dynamic tags for use in the context of the Semantic Web.
    XMP makes it possible to record information about a file as an XML document: title, author, history (...)

On other sites (5742)

  • FFMpeg/video4linux2,v4l2: The v4l2 frame is 46448 bytes, but 153600 bytes are expected (Webcam capture not fast enough?)

    30 August 2013, by user763410

    I am trying to capture webcam output on Linux/Ubuntu. I have a Chicony webcam (Lenovo laptop) and am running inside a VMware virtual machine. The capture does not proceed beyond 10 seconds. Can you please help?

    The command I used is:

    $ ffmpeg -y  -f video4linux2 -r 20 -s 160x120 -i /dev/video0 -acodec libfaac -ab 128k  /tmp/web.avi

    The most important message I am getting is:

    [video4linux2,v4l2 @ 0x9e43fa0] The v4l2 frame is 46448 bytes, but 153600 bytes are expected

    Complete message from ffmpeg:

    ffmpeg version N-55159-gf118b41 Copyright (c) 2000-2013 the FFmpeg developers
     built on Aug 18 2013 09:09:13 with gcc 4.6 (Ubuntu/Linaro 4.6.3-1ubuntu5)
     configuration: --enable-libass --prefix=/opt/ffmpeg --enable-debug --enable-libfreetype
     libavutil      52. 40.100 / 52. 40.100
     libavcodec     55. 19.100 / 55. 19.100
     libavformat    55. 12.102 / 55. 12.102
     libavdevice    55.  3.100 / 55.  3.100
     libavfilter     3. 82.100 /  3. 82.100
     libswscale      2.  4.100 /  2.  4.100
     libswresample   0. 17.103 /  0. 17.103
    [video4linux2,v4l2 @ 0x9e43fa0] The V4L2 driver changed the video from 160x120 to 320x240
    [video4linux2,v4l2 @ 0x9e43fa0] The driver changed the time per frame from 1/20 to 1/15
    Input #0, video4linux2,v4l2, from '/dev/video0':
     Duration: N/A, start: 6424.338678, bitrate: 18432 kb/s
       Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 320x240, 18432 kb/s, 15 fps, 15 tbr, 1000k tbn, 1000k tbc
    Codec AVOption ab (set bitrate (in bits/s)) specified for output file #0 (/tmp/web.avi) has not been used for any stream. The most likely reason is either wrong type (e.g. a video option with no video streams) or that it is a private option of some encoder which was not actually used for any stream.
    Output #0, avi, to '/tmp/web.avi':
     Metadata:
       ISFT            : Lavf55.12.102
       Stream #0:0: Video: mpeg4 (FMP4 / 0x34504D46), yuv420p, 320x240, q=2-31, 200 kb/s, 20 tbn, 20 tbc
    Stream mapping:
     Stream #0:0 -> #0:0 (rawvideo -> mpeg4)
    Press [q] to stop, [?] for help
    [video4linux2,v4l2 @ 0x9e43fa0] The v4l2 frame is 46448 bytes, but 153600 bytes are expected
    /dev/video0: Invalid data found when processing input
    frame=   29 fps= 14 q=3.5 Lsize=      87kB time=00:00:01.45 bitrate= 490.0kbits/s    
    video:80kB audio:0kB subtitle:0 global headers:0kB muxing overhead 7.760075%
    [video4linux2,v4l2 @ 0x9e43fa0] Some buffers are still owned by the caller on close.

  • Basic Video Palette Conversion

    20 August 2011, by Multimedia Mike — General, Python

    How do you take a 24-bit RGB image and convert it to an 8-bit paletted image for the purpose of compression using a codec that requires 8-bit input images? Seems simple enough and that’s what I’m tackling in this post.

    Ask FFmpeg/Libav To Do It
    Ideally, FFmpeg / Libav should be able to handle this automatically. Indeed, FFmpeg used to be able to, at least at the time I wrote this post about ZMBV and was unhappy with FFmpeg’s default results. Somewhere along the line, FFmpeg and Libav lost the ability to do this. I suspect it got removed during some swscale refactoring.

    Still, there’s no telling if the old system would have computed palettes correctly for QuickTime files.

    Distance Approach
    When I started writing my SMC video encoder, I needed to convert RGB (from PNG files) to PAL8 colorspace. The path of least resistance was to match the pixels in the input image to the default 256-color palette that QuickTime assumes (and is hardcoded into FFmpeg/Libav).

    How to perform the matching? Find the palette entry that is closest to a given input pixel, where "closest" means the minimum distance as computed by the usual distance formula (the square root of the sum of the squares of the differences of all the components).



    That means for each pixel in an image, check the pixel against 256 palette entries (early termination is possible if an acceptable threshold is met). As you might imagine, this can be a bit time-consuming. I wondered about a faster approach...
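
    For concreteness, the brute-force match might look like the following in Python. This is a minimal sketch rather than the post’s own code: nearest_palette_index is a hypothetical helper name, and palette is assumed to be a list of [R, G, B] entries like the one built in the full script at the end of the post.

    def nearest_palette_index(r, g, b, palette):
        # scan every palette entry, tracking the smallest squared distance;
        # comparing squared distances picks the same winner and skips the sqrt
        best_index = 0
        min_dsqrd = 0xFFFFFFFF
        for i, (pr, pg, pb) in enumerate(palette):
            dsqrd = (pr - r) ** 2 + (pg - g) ** 2 + (pb - b) ** 2
            if dsqrd < min_dsqrd:
                min_dsqrd = dsqrd
                best_index = i
        return best_index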

    Lookup Table
    I think this is the approach that FFmpeg used to use, but I went and derived it for myself after studying the default QuickTime palette table. There’s a pattern there: all of the RGB entries are composed of combinations of 6 values: 0x00, 0x33, 0x66, 0x99, 0xCC, and 0xFF. If you mix and match these for the red, green, and blue values, you come up with 6 * 6 * 6 = 216 different colors. This happens to be identical to the web-safe color palette.

    The first (0th) entry in the table is (FF, FF, FF), followed by (FF, FF, CC), (FF, FF, 99), and on down to (FF, FF, 00), at which point the green component gets knocked down a step and the next color is (FF, CC, FF). The first 36 palette entries in the table all have a red component of 0xFF. Thus, if an input RGB pixel has a red value closest to 0xFF, it must map to one of those first 36 entries.

    I created a table which maps component values 0..255 to indices 5..0. Each of the R, G, and B components of an input pixel is used to index into this table, deriving 3 indices ri, gi, and bi. Finally, the index into the palette table is given by:

      index = ri * 36 + gi * 6 + bi
    

    For example, the pixel (0xFE, 0xFE, 0x01) would yield ri, gi, and bi values of 0, 0, and 5. Therefore:

      index = 0 * 36 + 0 * 6 + 5
    

    The palette index is 5, which maps to color (0xFF, 0xFF, 0x00).
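
    As a small illustrative sketch of that lookup (table_palette_index is a hypothetical helper name; the pal8_table breakpoints mirror the initialization in the full script below):

    # map a component value 0..255 to a step index 5..0
    # (0x00 -> 5, 0x33 -> 4, ..., 0xFF -> 0); the breakpoints sit at the
    # midpoints between the six component steps
    pal8_table = [5] * 26 + [4] * 51 + [3] * 51 + [2] * 51 + [1] * 51 + [0] * 26

    def table_palette_index(r, g, b):
        ri = pal8_table[r]
        gi = pal8_table[g]
        bi = pal8_table[b]
        return ri * 36 + gi * 6 + bi

    # the worked example above: (0xFE, 0xFE, 0x01) -> palette index 5
    assert table_palette_index(0xFE, 0xFE, 0x01) == 5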

    Validation
    So I was pretty pleased with myself for coming up with that. Now, ideally, swapping out one algorithm for another in my SMC encoder should yield identical results. That wasn’t the case, initially.

    One problem is that the regulation QuickTime palette actually has 40 more entries above and beyond the typical 216-entry color cube (rounding out the grand total of 256 colors). Thus, using the distance approach with the full default table provides for a little more accuracy.

    However, there still seems to be a problem. Let’s check our old standby, the Big Buck Bunny logo image:

    [Image: Distance approach using the full 256-color QuickTime default palette]

    [Image: Distance approach using the 216-color palette]

    [Image: Table lookup approach using the 216-color palette]
    I can’t quite account for that big red splotch there. That’s the most notable difference between images 1 and 2 and the only visible difference between images 2 and 3.

    To prove to myself that the distance approach is equivalent to the table approach, I wrote a Python script to iterate through all possible RGB combinations and verify the equivalence. If you’re not up on your base-2 math, that’s 2^24 or 16,777,216 colors to run through. I used Python’s multiprocessing module to great effect and really maximized a Core i7 CPU with 8 hardware threads.

    So I’m confident that the palette conversion techniques are sound. The red spot is probably attributable to a bug in my WIP SMC encoder.

    Source Code
    Update, August 23, 2011: Here’s the Python code I used for proving equivalence between the 2 approaches. In terms of leveraging multiple CPUs, it’s possibly the best program I have written to date.

    Python:

    #!/usr/bin/python

    from multiprocessing import Pool

    palette = []
    pal8_table = []

    def process_r(r):
        counts = []
        for i in xrange(216):
            counts.append(0)

        print "r = %d" % (r)
        for g in xrange(256):
            for b in xrange(256):
                min_dsqrd = 0xFFFFFFFF
                best_index = 0
                for i in xrange(len(palette)):
                    dr = palette[i][0] - r
                    dg = palette[i][1] - g
                    db = palette[i][2] - b
                    dsqrd = dr * dr + dg * dg + db * db
                    if dsqrd < min_dsqrd:
                        min_dsqrd = dsqrd
                        best_index = i
                counts[best_index] += 1

                # check if the distance approach deviates from the
                # table-based approach for this input pixel
                ri = pal8_table[r]
                gi = pal8_table[g]
                bi = pal8_table[b]
                table_index = ri * 36 + gi * 6 + bi
                if table_index != best_index:
                    print "(0x%02X 0x%02X 0x%02X): distance index = %d, table index = %d" % (r, g, b, best_index, table_index)

        return counts

    if __name__ == '__main__':
        counts = []
        for i in xrange(216):
            counts.append(0)

        # initialize the reference palette (the 216-color web-safe cube)
        color_steps = [0xFF, 0xCC, 0x99, 0x66, 0x33, 0x00]
        for r in color_steps:
            for g in color_steps:
                for b in color_steps:
                    palette.append([r, g, b])

        # initialize the palette conversion table: component value 0..255
        # -> step index 5..0, split at the midpoints between the 6 steps
        for i in range(0, 26):
            pal8_table.append(5)
        for i in range(26, 77):
            pal8_table.append(4)
        for i in range(77, 128):
            pal8_table.append(3)
        for i in range(128, 179):
            pal8_table.append(2)
        for i in range(179, 230):
            pal8_table.append(1)
        for i in range(230, 256):
            pal8_table.append(0)

        # create a pool of worker processes and break up the overall job
        pool = Pool()
        it = pool.imap_unordered(process_r, range(256))
        try:
            while 1:
                partial_counts = it.next()
                for i in xrange(216):
                    counts[i] += partial_counts[i]
        except StopIteration:
            pass

        print "index, count, red, green, blue"
        for i in xrange(len(counts)):
            print "%d, %d, %d, %d, %d" % (i, counts[i], palette[i][0], palette[i][1], palette[i][2])

  • ffmpeg blend filter not working correctly on Android

    15 May 2018, by tainguyen

    I’m working on an Android project that uses the blend filter to create an "uncover down" video transition. My test device is a Samsung S4 (Android version: 4.4.2).
    This is my command string:

    ffmpeg
    -loop 1 -t 1 -i img001.jpg
    -loop 1 -t 1 -i img002.jpg
    -loop 1 -t 1 -i img003.jpg
    -loop 1 -t 1 -i img004.jpg
    -loop 1 -t 1 -i img005.jpg
    -filter_complex
    "[1:v][0:v]blend=all_expr='if(lte(Y,N*H/24),A,B)'[b1v];
    [2:v][1:v]blend=all_expr='if(lte(Y,H*N/24),A,B)'[b2v];
    [3:v][2:v]blend=all_expr='if(lte(Y,H*N/24),A,B)'[b3v];
    [4:v][3:v]blend=all_expr='if(lte(Y,H*N/24),A,B)'[b4v];
    [0:v][b1v][1:v][b2v][2:v][b3v][3:v][b4v]
    [4:v]concat=n=9:v=1:a=0,format=yuv420p[v]" -map "[v]" out_cover_top.mp4 -y

    The resulting video transition does not scroll from the top to the bottom as I want. It breaks into several parts (the red arrows show the top and bottom of a part).

    Help me please.