
Other articles (58)
-
Request to create a channel
12 March 2010. Depending on how the platform is configured, the user may have two different ways to request the creation of a channel. The first is at the moment of registration; the second, after registration, by filling out a request form.
Both methods ask for the same information and work in much the same way: the future user must fill in a series of form fields that first of all give the administrators information about (...)
-
MediaSPIP v0.2
21 June 2013. MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, as announced here.
The zip file provided here contains only the MediaSPIP sources, in the standalone version.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)
-
MediaSPIP version 0.1 Beta
16 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources, in the standalone version.
To get a working installation, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)
On other sites (14049)
-
OpenCV cv2 not working in Windows 7
6 September 2014, by Subhendu Sinha Chaudhuri. I have a Windows 7 SP1 64-bit machine with OpenCV 2.4.9 and Python 2.7.6 installed.
I use a precompiled version of OpenCV. The following code works perfectly for me:
import cv2.cv as cv
import time

cv.NamedWindow("camera", 0)
capture = cv.CaptureFromCAM(0)

while True:
    img = cv.QueryFrame(capture)
    cv.ShowImage("camera", img)
    if cv.WaitKey(10) == 27:
        break
cv.DestroyAllWindows()

Now when I try to use this code:
import cv2
import numpy as np

cam = cv2.VideoCapture(0)
s, img = cam.read()
winName = "Movement Indicator"
cv2.namedWindow(winName, cv2.CV_WINDOW_AUTOSIZE)

while s:
    cv2.imshow(winName, img)
    s, img = cam.read()
    key = cv2.waitKey(10)
    if key == 27:
        cv2.destroyWindow(winName)
        break
print "Goodbye"

The window is opened, the camera is initialized (as the camera lights are on), but nothing is displayed; the window closes and the program exits.
WHERE am I going wrong??
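Not an authoritative diagnosis, but two guesses based only on the code above: if the very first cam.read() returns s == False, the while s: loop never executes at all, which would match a window that opens and then immediately closes; and in 2.4.x Python builds the autosize flag is usually spelled cv2.WINDOW_AUTOSIZE rather than cv2.CV_WINDOW_AUTOSIZE. A minimal sketch that guards against both:

import cv2

cam = cv2.VideoCapture(0)
winName = "Movement Indicator"
cv2.namedWindow(winName, cv2.WINDOW_AUTOSIZE)

while True:
    s, img = cam.read()
    if not s:
        # the first few grabs can fail while the camera driver warms up
        continue
    cv2.imshow(winName, img)
    if cv2.waitKey(10) == 27:  # ESC
        break

cam.release()
cv2.destroyWindow(winName)
print "Goodbye"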
QUESTION 2
Can anyone also suggest how to capture a live video stream from my Linux machine 192.168.1.3? The stream is being generated by ffmpeg. The video stream can be opened in a web browser, but I want to capture it with OpenCV and Python.
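For what it's worth, cv2.VideoCapture also accepts a network URL when OpenCV is built with FFmpeg support. A rough sketch, with a hypothetical port and path since the post doesn't say how the stream is published:

import cv2

# Hypothetical URL: the actual port/path depends on how ffmpeg
# publishes the stream on 192.168.1.3.
stream_url = "http://192.168.1.3:8090/live.mjpeg"

cap = cv2.VideoCapture(stream_url)
while True:
    s, img = cap.read()
    if not s:
        break
    cv2.imshow("network stream", img)
    if cv2.waitKey(10) == 27:  # ESC
        break
cap.release()
cv2.destroyAllWindows()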
-
ffmpeg live stream latency
22 August 2014, by Alex Fu. I'm currently working on live streaming video from device A (source) to device B (destination) directly over a local WiFi network.
I've built FFmpeg to work on the Android platform, and I have been able to stream video from A -> B successfully, at the expense of latency: it takes about 20 seconds for a movement or change to appear on screen, as if the video were 20 seconds behind actual events. Initial start-up is around 4 seconds. I've been able to trim that initial start-up time down by lowering probesize and max_analyze_duration, but the 20-second delay is still there. I've sprinkled some timing events around the code to try and figure out where the most time is being spent...
- naInit: 0.24575 sec
- naSetup: 0.043705 sec
The first video frame isn't obtained until 0.035342 sec after the decodeAndRender function is called. Subsequent decoding times can be illustrated here:
http://jsfiddle.net/uff0jdf7/1/ (interactive graph)
From all the timing data I've recorded, nothing really jumps out at me, unless I'm doing the timing wrong. Some have suggested that I am buffering too much data; however, as far as I can tell, I'm only buffering one image at a time. Is this too much?
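Purely as an illustration (not the post's own code, which is C/JNI): the same demuxer options can be exercised from Python with the PyAV bindings, assuming a hypothetical source URL. probesize, analyzeduration (the option name behind the max_analyze_duration field the post mentions) and fflags=nobuffer are standard FFmpeg format options that reduce input probing and buffering before the first frame is delivered:

import av  # PyAV bindings around the same FFmpeg libraries

# Hypothetical source address; only the options dict matters here.
container = av.open(
    "udp://192.168.1.2:5000",
    options={
        "probesize": "32768",         # bytes probed before playback starts
        "analyzeduration": "500000",  # microseconds spent analyzing streams
        "fflags": "nobuffer",         # skip input buffering where possible
    },
)
for frame in container.decode(video=0):
    pass  # hand each decoded frame to the renderer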
Also, the source video that's coming in is in the format of P264; it's a custom implementation of H264, apparently.
jint naSetup(JNIEnv *pEnv, jobject pObj, int pWidth, int pHeight) {
    width = pWidth;
    height = pHeight;
    // create a bitmap as the buffer for frameRGBA
    bitmap = createBitmap(pEnv, pWidth, pHeight);
    if (AndroidBitmap_lockPixels(pEnv, bitmap, &pixel_buffer) < 0) {
        LOGE("Could not lock bitmap pixels");
        return -1;
    }
    // get the scaling context
    sws_ctx = sws_getContext(codecCtx->width, codecCtx->height, codecCtx->pix_fmt,
            pWidth, pHeight, AV_PIX_FMT_RGBA, SWS_BILINEAR, NULL, NULL, NULL);
    // Assign appropriate parts of the bitmap to the image planes in frameRGBA.
    // Note that frameRGBA is an AVFrame, and AVFrame is a superset of AVPicture.
    av_image_fill_arrays(frameRGBA->data, frameRGBA->linesize, pixel_buffer,
            AV_PIX_FMT_RGBA, pWidth, pHeight, 1);
    return 0;
}
void decodeAndRender(JNIEnv *pEnv) {
    ANativeWindow_Buffer windowBuffer;
    AVPacket packet;
    AVPacket outputPacket;
    int frame_count = 0;
    int got_frame;

    while (!stop && av_read_frame(formatCtx, &packet) >= 0) {
        // Is this a packet from the video stream?
        if (packet.stream_index == video_stream_index) {
            // Decode video frame
            avcodec_decode_video2(codecCtx, decodedFrame, &got_frame, &packet);
            // Did we get a video frame?
            if (got_frame) {
                // Convert the image from its native format to RGBA
                sws_scale(sws_ctx, (uint8_t const * const *) decodedFrame->data,
                        decodedFrame->linesize, 0, codecCtx->height,
                        frameRGBA->data, frameRGBA->linesize);
                // lock the window buffer
                if (ANativeWindow_lock(window, &windowBuffer, NULL) < 0) {
                    LOGE("Cannot lock window");
                } else {
                    // draw the frame on buffer
                    int h;
                    for (h = 0; h < height; h++) {
                        memcpy(windowBuffer.bits + h * windowBuffer.stride * 4,
                                pixel_buffer + h * frameRGBA->linesize[0],
                                width * 4);
                    }
                    // unlock the window buffer and post it to display
                    ANativeWindow_unlockAndPost(window);
                    // count number of frames
                    ++frame_count;
                }
            }
        }
        // Free the packet that was allocated by av_read_frame
        av_free_packet(&packet);
    }
    LOGI("Total # of frames decoded and rendered %d", frame_count);
}

-
How to capture audio/video from two different browser windows
30 August 2018, by Colegatron. I am writing an application that needs to run two browser windows and to capture audio and video from each of them into different output files.
It runs under Linux (Ubuntu 18, PulseAudio). As per the ffmpeg and Xvfb documentation, I see how to render each window in a different framebuffer and capture the image, getting separate outputs.
But as far as I can see, ffmpeg uses -i to specify the audio device to capture from, so the audio of both windows ends up mixed together. What I need is to specify an application/process/window. The question is: how can I also capture the audio for each browser window, without mixing the audio of both windows?
Thanks a lot
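Not from the post itself, but one commonly used approach under PulseAudio is to create a separate null sink per browser, move each browser's audio stream to its own sink, and record each sink's monitor source with ffmpeg's pulse input. A rough sketch; the sink names, display numbers and sink-input indexes are hypothetical placeholders that must be looked up with `pactl list sink-inputs` once the browsers are playing audio:

import subprocess

def run(cmd):
    # thin wrapper so every command is visible in one place
    subprocess.run(cmd, check=True)

# One isolated null sink per browser (names are arbitrary).
run(["pactl", "load-module", "module-null-sink", "sink_name=browser1"])
run(["pactl", "load-module", "module-null-sink", "sink_name=browser2"])

# Route each browser's audio stream to its own sink. The indexes
# 42 and 43 are placeholders: find the real ones with
# `pactl list sink-inputs`.
run(["pactl", "move-sink-input", "42", "browser1"])
run(["pactl", "move-sink-input", "43", "browser2"])

# Record each window: video via x11grab from its Xvfb display,
# audio from the matching sink monitor. Displays :1/:2 are
# whatever the Xvfb instances were started with.
for display, sink, out in [(":1", "browser1", "win1.mkv"),
                           (":2", "browser2", "win2.mkv")]:
    subprocess.Popen(["ffmpeg",
                      "-f", "x11grab", "-i", display,
                      "-f", "pulse", "-i", sink + ".monitor",
                      out])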