
Media (1)
-
The conservation of net art in museums. The strategies at work
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (71)
-
MediaSPIP Core: Configuration
9 November 2010
MediaSPIP Core provides three different configuration pages by default (these pages rely on the CFG configuration plugin): a page for the general configuration of the skeletons; a page for configuring the site's home page; a page for configuring the sections.
It also provides an additional page, shown only when certain plugins are enabled, for controlling their display and specific features (...) -
The MediaSPIP configuration area
29 November 2010
The MediaSPIP configuration area is restricted to administrators. An "administer" menu link is usually displayed at the top of the page [1].
It lets you configure your site in detail.
Navigation within this configuration area is divided into three parts: the general site configuration, which among other things lets you edit the main information about the site (...) -
Personalizing by adding your logo, banner or background image
5 September 2013
Some themes support three personalization elements: adding a logo; adding a banner; adding a background image.
On other sites (7973)
-
OpenCV accept HTTP video streaming with different endpoints
22 June 2019, by nabroyan
I am writing a server that receives a live video stream over HTTP and performs different operations depending on the endpoint.
I am using OpenCV for that, and my basic code looks something like this:

void f1()
{
    cv::VideoCapture cap("http://localhost:8080/operation1");
    if (!cap.isOpened())
        return;
    for (;;)
    {
        cv::Mat frame;
        cap >> frame;
        // do something
    }
}
void f2()
{
    cv::VideoCapture cap("http://localhost:8080/operation2");
    if (!cap.isOpened())
        return;
    for (;;)
    {
        cv::Mat frame;
        cap >> frame;
        // do something else
    }
}
void f3() { ... }

As you can see, each function listens to a different HTTP endpoint and performs a different operation. Each function runs in a separate thread.
The problem is that for some reason OpenCV ignores the resource part of the endpoint, i.e. as long as ip:port is correct it will accept the stream no matter whether it comes from operation1 or operation2.
For testing purposes I use this command as a stream generator:
ffmpeg -i sample.mp4 -listen 1 -f matroska -c:v libx264 -preset fast -tune zerolatency http://localhost:8080/operation1
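One likely explanation, worth verifying: ffmpeg's -listen option turns ffmpeg into a single-stream HTTP server bound to the whole port, and it serves that one stream regardless of the path a client requests, so there is never a second stream for OpenCV to distinguish. Under that assumption, a workaround sketch is to run one generator per port (the second port number here is an invented example):

```shell
# Sketch (assumption: the -listen server does not route by URL path, so each
# stream needs its own port). Run one generator per port, in the background:
ffmpeg -i sample.mp4 -listen 1 -f matroska -c:v libx264 -preset fast -tune zerolatency http://localhost:8080/operation1 &
ffmpeg -i sample.mp4 -listen 1 -f matroska -c:v libx264 -preset fast -tune zerolatency http://localhost:8081/operation2 &
```

With this setup f1 would open port 8080 and f2 port 8081, so each cv::VideoCapture receives a genuinely distinct stream.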
How can I make OpenCV differentiate between HTTP endpoints? What am I missing? I have no previous experience with OpenCV or photo/video processing. -
How to convert bitmaps array to a video in Android ?
12 July 2020, by Kamran Janjua
I have a buffer which is filled with image bitmaps as they arrive (a thread continuously takes pictures). I would then like to dump that bitmap buffer (I currently use a hashmap so I can match the keys) into an .mp4 file.

Here is the code to continuously capture the images using a handler.


button.setOnClickListener {
    prepareUIForCapture()
    if (isRunning) {
        handler.removeCallbacksAndMessages(null)
        Logd("Length of wide: " + MainActivity.wideBitmaps.size)
        Logd("Length of normal: " + MainActivity.normalBitmaps.size)
        // This is where the make video would be called => makeVideoFootage()
        restartActivity()
    } else {
        button.text = "Stop"
        handler.postDelayed(object : Runnable {
            override fun run() {
                twoLens.reset()
                twoLens.isTwoLensShot = true
                MainActivity.cameraParams.get(dualCamLogicalId).let {
                    if (it?.isOpen == true) {
                        Logd("In onClick. Taking Dual Cam Photo on logical camera: " + dualCamLogicalId)
                        takePicture(this@MainActivity, it)
                        Toast.makeText(applicationContext, "Captured", Toast.LENGTH_LONG).show()
                    }
                }
                handler.postDelayed(this, 1000)
            }
        }, 1000)
    }
    isRunning = !isRunning
}



This takes a picture every second until the stop button is pressed. Here is the code that retrieves the images and saves them into the hashmaps.


val wideBuffer: ByteBuffer? = twoLens.wideImage!!.planes[0].buffer
val wideBytes = ByteArray(wideBuffer!!.remaining())
wideBuffer.get(wideBytes)

val normalBuffer: ByteBuffer? = twoLens.normalImage!!.planes[0].buffer
val normalBytes = ByteArray(normalBuffer!!.remaining())
normalBuffer.get(normalBytes)

val tempWideBitmap = BitmapFactory.decodeByteArray(wideBytes, 0, wideBytes.size, null)
val tempNormalBitmap = BitmapFactory.decodeByteArray(normalBytes, 0, normalBytes.size, null)
MainActivity.counter += 1
MainActivity.wideBitmaps.put(MainActivity.counter.toString(), tempWideBitmap)
MainActivity.normalBitmaps.put(MainActivity.counter.toString(), tempNormalBitmap)



counter is used to match the frames; that is why I am using a hashmap instead of an array. I have included the ffmpeg dependency as follows.

implementation 'com.writingminds:FFmpegAndroid:0.3.2'




Is this the correct way? I would appreciate some starter code for makeVideoFootage().

fun makeVideoFootage() {
    // I would like to get the bitmaps from MainActivity.wideBitmaps and then dump them into a video wide.mp4.
}



Any help regarding this would be appreciated.


P.S. I have read the existing questions and their answers (which run ffmpeg from the command line), but I do not know how to proceed.
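As a hedged starting point (an assumption, not something stated in the question): one common route is to first compress each bitmap to a numbered image file, then mux the sequence with ffmpeg's image sequence input. Assuming the wide frames were written out as wide_1.png, wide_2.png, ... and were captured at one frame per second as in the handler code above, the muxing step would look something like:

```shell
# Hypothetical sketch: assumes the bitmaps were saved as wide_1.png, wide_2.png, ...
# -framerate 1 matches the 1-second capture interval in the handler code;
# -pix_fmt yuv420p keeps the output playable in most players.
ffmpeg -framerate 1 -start_number 1 -i wide_%d.png -c:v libx264 -pix_fmt yuv420p wide.mp4
```

On Android this command would need to be passed to a bundled ffmpeg (for example through the FFmpegAndroid wrapper declared in the build.gradle line above) rather than typed into a shell.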


-
How to add video play duration with ffmpeg? [duplicate]
2 August 2019, by Mostafa Daash
I want to add a video duration stamp to an mp4 file, like in this photo: https://i.ibb.co/m6cd63k/Untitled-1.jpg
I use this code to output a video file with a watermark:
ffmpeg -i 22.mp4 -i logo50.png -filter_complex "overlay=x=40:y=20" -preset medium -crf 24 -codec:a aac -b:a 128k -codec:v libx264 -pix_fmt yuv420p 22.xxxx.mp4
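A hedged sketch of one way to burn the elapsed play time into the frames: ffmpeg's drawtext filter can render the current presentation timestamp via its %{pts} text expansion. The font path, position, and styling below are assumptions to be adapted; the logo overlay from the command above is kept in the same filter chain.

```shell
# Sketch: stamp the elapsed time (hh:mm:ss.mmm) in the top-left corner while
# keeping the existing logo overlay. The font path is an assumption.
ffmpeg -i 22.mp4 -i logo50.png -filter_complex \
  "overlay=x=40:y=20,drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf:text='%{pts\:hms}':x=10:y=10:fontsize=36:fontcolor=white:box=1:boxcolor=black@0.5" \
  -preset medium -crf 24 -codec:a aac -b:a 128k -codec:v libx264 -pix_fmt yuv420p 22.stamped.mp4
```

Note that this burns the running time into the pixels permanently; a player-style countdown of the remaining duration would instead need the total duration queried first (e.g. with ffprobe) and subtracted in the drawtext expression.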