Advanced search

Media (0)

Word: - Tags -/diogene

No media matching your criteria is available on the site.

Other articles (64)

  • The plugin: Podcasts.

    14 July 2010, by

    The problem of podcasting is once again one that reveals the state of standardization of data transport on the Internet.
    Two interesting formats exist: the one developed by Apple, strongly oriented towards the use of iTunes, whose SPEC is here; the "Media RSS Module" format, which is more "free" and notably supported by Yahoo and the Miro software.
    File types supported in feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
    Where possible, text is analyzed in order to retrieve the data needed for search engine detection, and then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • XMP PHP

    13 May 2011, by

    Per Wikipedia, XMP means:
    Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphic design applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
    Being based on XML, it manages a set of dynamic tags for use within the Semantic Web.
    XMP makes it possible to record, in the form of an XML document, information about a file: title, author, history (...)

On other sites (7714)

  • av1/h264_metadata, filter_units: Count down when deleting units

    17 June 2019, by Andreas Rheinhardt
    av1/h264_metadata, filter_units: Count down when deleting units
    

    When testing whether a particular unit should be kept or discarded, it
    is best to start at the very last unit of a fragment and count down,
    because that way a unit that will eventually be deleted won't be
    memmoved during earlier deletions; and frag/au->nb_units need only be
    evaluated once in this case and the counter is automatically correct
    when a unit got deleted.

    It also works for double loops, i.e. when looping over all SEI messages
    in all SEI units of an access unit.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>

    • [DH] libavcodec/av1_metadata_bsf.c
    • [DH] libavcodec/filter_units_bsf.c
    • [DH] libavcodec/h264_metadata_bsf.c
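    The counting-down pattern described in the commit can be sketched outside C as well; the following minimal Kotlin illustration uses a made-up list and predicate, not the actual libavcodec code:

```kotlin
// Remove all elements matching a predicate by scanning from the last index
// down to 0. An element that will itself be deleted is never shifted by an
// earlier deletion (those all happen at higher indices), and the loop bound
// never needs re-evaluating after a removal.
fun removeMatchingCountingDown(units: MutableList<Int>, shouldDelete: (Int) -> Boolean) {
    for (i in units.indices.reversed()) {
        if (shouldDelete(units[i])) {
            units.removeAt(i)
        }
    }
}
```

    The same shape works for the double loop mentioned above: iterate the units of a fragment downwards, and within each unit iterate its messages downwards.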
  • truehd_core: Miscellaneous improvements

    6 July 2019, by Andreas Rheinhardt
    truehd_core: Miscellaneous improvements
    

    1. The loop counter of the substream_directory loop is always less than
    the number of substreams, yet within the loop it is checked whether it
    is less than FFMIN(3, s->hdr.num_substreams), although the check for < 3
    would suffice.
    2. In case the packet is a major sync packet, the last two bytes of the
    major sync structure were initialized to 0xff and then immediately
    overwritten afterwards without ever making use of the values just set.
    3. When updating the parity_nibble during writing the new
    substream_directory, the parity_nibble is updated one byte at a time
    with bytes that might be read from the output packet's data. But one can
    do both bytes at the same time without resorting to the data just
    written by XOR'ing with the variable that contains the value that has
    just been written as a big endian number. This changes the intermediate
    value of parity_nibble, but in the end it just amounts to a reordering
    of the sum modulo two that will eventually be written as parity_nibble.
    Due to associativity and commutativity, this value is unchanged.
    4. init_get_bits8 already checks that no overflow happens during the
    conversion of its argument from bytes to bits. ff_mlp_read_major_sync
    makes sure not to overread (the maximum size of a major_sync_info is 60
    bytes anyway) and last_offset is < 2^13, so that no overflow in the
    calculation of size can happen, i.e. the check for whether size is >= 0
    is unnecessary. But then size is completely unnecessary and can be
    removed.
    5. In case the packet is just passed through, it is unnecessary to read
    the packet's dts. This is therefore postponed to when we know that the
    packet is not passed through.
    6. Given that it seems overkill to use a bitreader just for one
    variable, the size of the input access unit is now read directly.
    7. A substream's offset (of the end of the substream) is now stored as is
    (i.e. in units of words).

    These changes amount to a slight performance improvement: it improved
    from 5897 decicycles of ten runs with about 262144 runs each (including
    an insignificant number of skips, usually about 20-25) to 5747
    decicycles under the same conditions.

    Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>

    • [DH] libavcodec/truehd_core_bsf.c
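    Point 3 relies only on XOR being associative and commutative. A small Kotlin sketch (with illustrative names, not the actual libavcodec code) shows that folding the 16-bit value just written is equivalent to re-reading and folding the two output bytes one at a time:

```kotlin
// Fold one byte at a time, as if re-reading the two bytes from the output.
fun foldByteAtATime(acc: Int, word: Int): Int {
    var a = acc
    a = a xor ((word shr 8) and 0xFF) // high byte of the big-endian pair
    a = a xor (word and 0xFF)         // low byte
    return a
}

// Fold both bytes at once from the variable that was just written.
fun foldBothBytes(acc: Int, word: Int): Int =
    acc xor ((word shr 8) and 0xFF) xor (word and 0xFF)
```

    The intermediate accumulator values differ between the two orders, but the final parity is the same.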
  • How to convert a bitmap array to a video in Android?

    12 July 2020, by Kamran Janjua

    I have a buffer which is filled with image bitmaps as they arrive (using a thread to continuously take pictures). I would then like to dump that bitmap buffer (I have a hashmap at the moment for matching the keys) into a .mp4 file.

    Here is the code to continuously capture the images using a handler.

    button.setOnClickListener {
        prepareUIForCapture()
        if (isRunning) {
            handler.removeCallbacksAndMessages(null)
            Logd("Length of wide: " + MainActivity.wideBitmaps.size)
            Logd("Length of normal: " + MainActivity.normalBitmaps.size)
            // This is where the make video would be called => makeVideoFootage()
            restartActivity()
        } else {
            button.text = "Stop"
            handler.postDelayed(object : Runnable {
                override fun run() {
                    twoLens.reset()
                    twoLens.isTwoLensShot = true
                    MainActivity.cameraParams.get(dualCamLogicalId).let {
                        if (it?.isOpen == true) {
                            Logd("In onClick. Taking Dual Cam Photo on logical camera: " + dualCamLogicalId)
                            takePicture(this@MainActivity, it)
                            Toast.makeText(applicationContext, "Captured", Toast.LENGTH_LONG).show()
                        }
                    }
                    handler.postDelayed(this, 1000)
                }
            }, 1000)
        }
        isRunning = !isRunning
    }

    This takes a picture every second until the stop button is pressed. Here is the function that retrieves the images and saves them into a hashmap.

    val wideBuffer: ByteBuffer? = twoLens.wideImage!!.planes[0].buffer
    val wideBytes = ByteArray(wideBuffer!!.remaining())
    wideBuffer.get(wideBytes)

    val normalBuffer: ByteBuffer? = twoLens.normalImage!!.planes[0].buffer
    val normalBytes = ByteArray(normalBuffer!!.remaining())
    normalBuffer.get(normalBytes)

    val tempWideBitmap = BitmapFactory.decodeByteArray(wideBytes, 0, wideBytes.size, null)
    val tempNormalBitmap = BitmapFactory.decodeByteArray(normalBytes, 0, normalBytes.size, null)
    MainActivity.counter += 1
    MainActivity.wideBitmaps.put(MainActivity.counter.toString(), tempWideBitmap)
    MainActivity.normalBitmaps.put(MainActivity.counter.toString(), tempNormalBitmap)

    counter is used to match the frames, which is why I am using a hashmap instead of an array. I have included the FFmpeg dependency as follows.

    implementation 'com.writingminds:FFmpegAndroid:0.3.2'

    Is this the correct way? I would appreciate some starter code in makeVideoFootage().

    fun makeVideoFootage() {
        // I would like to get the bitmaps from MainActivity.wideBitmaps and then dump them into a video wide.mp4.
    }

    Any help regarding this would be appreciated.

    P.S. I have read the existing questions and their answers (running from the command line), but I do not know how to proceed.
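    One possible starting point for makeVideoFootage(), assuming the bitmaps are first saved as numbered JPEG files (frame_1.jpg, frame_2.jpg, ...) and then stitched together with an FFmpeg command. The paths, file names and frame rate below are illustrative, and the final execute call against the FFmpegAndroid library is Android-only, so it is only indicated in a comment:

```kotlin
// Build the FFmpeg argument list that turns numbered JPEG frames into an MP4.
// With com.writingminds:FFmpegAndroid, the resulting array could be passed to
// FFmpeg.getInstance(context).execute(args, responseHandler) on a device.
fun buildFfmpegArgs(frameDir: String, fps: Int, output: String): Array<String> =
    arrayOf(
        "-y",                           // overwrite the output file if present
        "-framerate", fps.toString(),   // one capture per second -> fps = 1
        "-i", "$frameDir/frame_%d.jpg", // numbered frames, matching the counter keys
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",          // broadest player compatibility
        output
    )
```

    Each bitmap would be compressed to its numbered file (e.g. with Bitmap.compress into a FileOutputStream) before running the command.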
