
Other articles (64)
-
The plugin: Podcasts.
14 July 2010
The problem of podcasting is once again one that highlights the issue of standardizing data transport on the Internet.
Two interesting formats exist: the one developed by Apple, heavily geared toward iTunes, whose spec is here; and the "Media RSS Module" format, which is more "free" and notably backed by Yahoo and the Miro software.
File types supported in the feeds
Apple's format only allows the following types in its feeds: .mp3 audio/mpeg, .m4a audio/x-m4a, .mp4 (...)
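As a hedged illustration (URLs and sizes invented), a feed item can serve both camps by carrying a plain RSS enclosure with one of the allowed MIME types alongside a Media RSS element; the media: prefix assumes xmlns:media="http://search.yahoo.com/mrss/" is declared on the feed:

<item>
  <title>Episode 1</title>
  <enclosure url="https://example.org/ep1.mp3" length="8727310" type="audio/mpeg"/>
  <media:content url="https://example.org/ep1.mp3" type="audio/mpeg" medium="audio"/>
</item>
-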
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded as Ogv and WebM (supported by HTML5) and as MP4 (supported by Flash).
Audio files are encoded as Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analyzed in order to retrieve the data needed for search engine indexing, and is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
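MediaSPIP drives ffmpeg under the hood for these conversions; as a rough sketch of what equivalent manual conversions could look like (file names and codec choices are illustrative, not MediaSPIP's actual settings):

ffmpeg -i input.mov -c:v libtheora -c:a libvorbis video.ogv
ffmpeg -i input.mov -c:v libvpx -c:a libvorbis video.webm
ffmpeg -i input.mov -c:v libx264 -c:a aac video.mp4
ffmpeg -i input.wav -c:a libmp3lame audio.mp3
ffmpeg -i input.wav -c:a libvorbis audio.ogg
-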
XMP PHP
13 May 2011
According to Wikipedia, XMP means:
Extensible Metadata Platform, or XMP, is an XML-based metadata format used in PDF, photography and graphics applications. It was launched by Adobe Systems in April 2001, integrated into version 5.0 of Adobe Acrobat.
Being based on XML, it manages a dynamic set of tags for use within the Semantic Web.
XMP makes it possible to record, in the form of an XML document, information about a file: title, author, history (...)
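For illustration (all values invented), a minimal XMP packet recording a title and an author as Dublin Core properties looks roughly like this:

<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description rdf:about=""
                     xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title><rdf:Alt><rdf:li xml:lang="x-default">My photo</rdf:li></rdf:Alt></dc:title>
      <dc:creator><rdf:Seq><rdf:li>Jane Doe</rdf:li></rdf:Seq></dc:creator>
    </rdf:Description>
  </rdf:RDF>
</x:xmpmeta>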
On other sites (7714)
-
av1/h264_metadata, filter_units: Count down when deleting units
17 June 2019, by Andreas Rheinhardt
When testing whether a particular unit should be kept or discarded, it is best to start at the very last unit of a fragment and count down, because that way a unit that will eventually be deleted is not memmoved during earlier deletions; frag/au->nb_units need only be evaluated once in this case, and the counter is automatically correct when a unit has been deleted.

It also works for double loops, i.e. when looping over all SEI messages in all SEI units of an access unit.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
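A minimal sketch of the counting-down idea (in Kotlin with a plain list, not the C unit arrays the commit actually touches; all names invented):

fun <T> MutableList<T>.deleteUnitsCountingDown(keep: (T) -> Boolean) {
    // lastIndex is read once up front; counting down keeps every index
    // below i valid after a removal, and removeAt(i) shifts only elements
    // already inspected and kept, never a unit still awaiting deletion.
    for (i in lastIndex downTo 0) {
        if (!keep(this[i])) removeAt(i)
    }
}

Counting up would instead shift the not-yet-inspected tail on every deletion, moving elements that are themselves going to be deleted, and would require re-reading the element count after each removal.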
-
truehd_core: Miscellaneous improvements
6 July 2019, by Andreas Rheinhardt
1. The loop counter of the substream_directory loop is always less than
the number of substreams, yet within the loop it is checked whether it
is less than FFMIN(3, s->hdr.num_substreams), although the check for < 3
would suffice.
2. In case the packet is a major sync packet, the last two bytes of the
major sync structure were initialized to 0xff and then immediately
overwritten afterwards without ever making use of the values just set.
3. When updating the parity_nibble while writing the new
substream_directory, the parity_nibble was updated one byte at a time
with bytes that might be read back from the output packet's data. But both
bytes can be handled at once, without touching the data just written, by
XOR'ing with the variable that holds the value just written as a big-endian
number (see the sketch after this commit message). This changes the
intermediate value of parity_nibble, but in the end it amounts only to a
reordering of the sum modulo two that is eventually written as
parity_nibble; due to associativity and commutativity, this value is unchanged.
4. init_get_bits8 already checks that no overflow happens during the
conversion of its argument from bytes to bits. ff_mlp_read_major_sync
makes sure not to overread (the maximum size of a major_sync_info is 60
bytes anyway) and last_offset is < 2^13, so that no overflow in the
calculation of size can happen, i.e. the check for whether size is >= 0
is unnecessary. But then size is completely unnecessary and can be
removed.
5. In case the packet is just passed through, it is unnecessary to read
the packet's dts. This is therefore postponed to when we know that the
packet is not passed through.
6. Given that it seems overkill to use a bitreader just for one
variable, the size of the input access unit is now read directly.
7. A substream's offset (of the end of the substream) is now stored as is
(i.e. in units of words).

These changes amount to a slight performance improvement: it improved
from 5897 decicycles (ten runs with about 262144 runs each, including an
insignificant number of skips, usually about 20-25) to 5747 decicycles
under the same conditions.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@gmail.com>
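A hedged sketch of point 3 (Kotlin with invented names; the real code operates on truehd_core's parity computation while writing 16-bit big-endian offsets): folding both bytes of the value just written into the running parity avoids re-reading the output buffer, and because XOR is associative and commutative the final result matches byte-at-a-time updates.

// "word" is the 16-bit value that was just written big-endian; both of
// its bytes are folded into the running parity in one step, without
// reading anything back from the packet data.
fun foldWrittenWord(parity: Int, word: Int): Int =
    parity xor (word ushr 8) xor (word and 0xFF)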
-
How to convert a bitmap array to a video in Android?
12 July 2020, by Kamran Janjua
I have a buffer which is filled with image bitmaps as they arrive (using a thread to continuously take pictures). I would then like to dump that bitmap buffer (I use a hashmap at the moment to match the keys) into a .mp4 file.

Here is the code to continuously capture the images using a handler.


button.setOnClickListener {
    prepareUIForCapture()
    if (isRunning) {
        handler.removeCallbacksAndMessages(null)
        Logd("Length of wide: " + MainActivity.wideBitmaps.size)
        Logd("Length of normal: " + MainActivity.normalBitmaps.size)
        // This is where the make video would be called => makeVideoFootage()
        restartActivity()
    } else {
        button.text = "Stop"
        handler.postDelayed(object : Runnable {
            override fun run() {
                twoLens.reset()
                twoLens.isTwoLensShot = true
                MainActivity.cameraParams.get(dualCamLogicalId).let {
                    if (it?.isOpen == true) {
                        Logd("In onClick. Taking Dual Cam Photo on logical camera: " + dualCamLogicalId)
                        takePicture(this@MainActivity, it)
                        Toast.makeText(applicationContext, "Captured", Toast.LENGTH_LONG).show()
                    }
                }
                handler.postDelayed(this, 1000)
            }
        }, 1000)
    }
    isRunning = !isRunning
}



This takes a picture every second until the stop button is pressed. Here is the code that retrieves the images and saves them into a hashmap.


// Copy the compressed image bytes out of the first plane of each captured Image.
val wideBuffer: ByteBuffer? = twoLens.wideImage!!.planes[0].buffer
val wideBytes = ByteArray(wideBuffer!!.remaining())
wideBuffer.get(wideBytes)

val normalBuffer: ByteBuffer? = twoLens.normalImage!!.planes[0].buffer
val normalBytes = ByteArray(normalBuffer!!.remaining())
normalBuffer.get(normalBytes)

// Decode both byte arrays to bitmaps and store them under a shared counter key
// so the wide and normal frames captured together can be matched later.
val tempWideBitmap = BitmapFactory.decodeByteArray(wideBytes, 0, wideBytes.size, null)
val tempNormalBitmap = BitmapFactory.decodeByteArray(normalBytes, 0, normalBytes.size, null)
MainActivity.counter += 1
MainActivity.wideBitmaps.put(MainActivity.counter.toString(), tempWideBitmap)
MainActivity.normalBitmaps.put(MainActivity.counter.toString(), tempNormalBitmap)



counter is used to match the frames, which is why I am using a hashmap instead of an array. I have included ffmpeg as follows.

implementation 'com.writingminds:FFmpegAndroid:0.3.2'




Is this the correct way? I would appreciate some starter code in makeVideoFootage().

fun makeVideoFootage() {
    // I would like to get the bitmaps from MainActivity.wideBitmaps and then dump them into a video wide.mp4.
}



Any help regarding this would be appreciated.


P.S. I have read the existing questions and their answers (running from the command line), but I do not know how to proceed.
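One possible starter sketch, under explicit assumptions: the writingminds wrapper from the build.gradle line above exposes FFmpeg.getInstance(...) and execute(String[], handler) with an ExecuteBinaryResponseHandler callback, and its binary has already been loaded with loadBinary() elsewhere; the frames were captured at 1 fps; the "frames" directory and file names are invented for the example. The idea is to dump the bitmaps as numbered PNGs and let ffmpeg's image2 demuxer assemble them into wide.mp4.

import android.content.Context
import android.graphics.Bitmap
import android.util.Log
import com.github.hiteshsondhi88.libffmpeg.ExecuteBinaryResponseHandler
import com.github.hiteshsondhi88.libffmpeg.FFmpeg
import java.io.File
import java.io.FileOutputStream

fun makeVideoFootage(context: Context) {
    // Write every bitmap as frame_00001.png, frame_00002.png, ... in capture order.
    val dir = File(context.cacheDir, "frames").apply { mkdirs() }
    MainActivity.wideBitmaps.toSortedMap(compareBy<String> { it.toInt() }).forEach { (key, bmp) ->
        FileOutputStream(File(dir, "frame_%05d.png".format(key.toInt()))).use { out ->
            bmp.compress(Bitmap.CompressFormat.PNG, 100, out)
        }
    }
    val output = File(context.getExternalFilesDir(null), "wide.mp4")
    // -framerate 1 matches the 1-second capture interval; yuv420p keeps the
    // H.264 output playable on most devices.
    val cmd = arrayOf(
        "-framerate", "1",
        "-i", "${dir.absolutePath}/frame_%05d.png",
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        output.absolutePath
    )
    FFmpeg.getInstance(context).execute(cmd, object : ExecuteBinaryResponseHandler() {
        override fun onSuccess(message: String?) {
            Log.d("makeVideoFootage", "wrote ${output.absolutePath}")
        }

        override fun onFailure(message: String?) {
            Log.d("makeVideoFootage", "ffmpeg failed: $message")
        }
    })
}

Writing intermediate image files is the simplest route with a command-line wrapper; encoding the bitmaps directly with MediaCodec/MediaMuxer would avoid the disk round-trip at the cost of handling YUV conversion yourself.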