
Other articles (63)
-
Requesting the creation of a channel
12 March 2010
Depending on how the platform is configured, the user may have two different ways of requesting the creation of a channel: the first at the moment of registration, the second after registration, by filling in a request form.
The two methods ask for the same information and work in much the same way: the future user must fill in a series of form fields that first of all give the administrators information about (...)
-
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources, in the standalone version.
As with the previous version, all the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)
-
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources, in the standalone version.
To get a working installation, all the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)
On other sites (14018)
-
Transcode video using celery and ffmpeg in django
28 October 2015, by Robin
I would like to transcode user-uploaded videos using celery. I think I should first upload the video and then spawn a celery task for the transcoding.
Maybe something like this in tasks.py:
subprocess.call('ffmpeg -i path/.../original path/.../output')
I have just completed "First steps with celery", so I am confused about how to do this in views.py and tasks.py. Also, is it a good solution? I would really appreciate your help and advice. Thank you.
models.py:
class Video(models.Model):
user = models.ForeignKey(User)
title = models.CharField(max_length=100)
original = models.FileField(upload_to=get_upload_file_name)
mp4_480 = models.FileField(upload_to=get_upload_file_name, blank=True, null=True)
mp4_720 = models.FileField(upload_to=get_upload_file_name, blank=True, null=True)
privacy = models.CharField(max_length=1,choices=PRIVACY, default='F')
    pub_date = models.DateTimeField(auto_now_add=True, auto_now=False)
My incomplete views.py:
@login_required
def upload_video(request):
if request.method == 'POST':
form = VideoForm(request.POST, request.FILES)
if form.is_valid():
if form.cleaned_data:
user = request.user
#
#
# No IDEA WHAT TO DO NEXT
#
#
return HttpResponseRedirect('/')
else:
form = VideoForm()
return render(request, 'upload_video.html', {
'form':form
        })
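A minimal sketch of one way the pieces could fit together, assuming VideoForm is a ModelForm for Video and that a celery app is already configured; the transcode_video task name, the app import path and the ffmpeg output settings are illustrative, not taken from the question:

# tasks.py (sketch)
import subprocess
from celery import shared_task

@shared_task
def transcode_video(video_id):
    # Hypothetical task: look up the uploaded Video row and run ffmpeg on its file.
    from videos.models import Video   # adjust the import to your app's path
    video = Video.objects.get(pk=video_id)
    output_path = video.original.path + '.480p.mp4'
    # Run ffmpeg inside the worker; a fuller version would also store
    # output_path in video.mp4_480 and check the return code.
    subprocess.call(['ffmpeg', '-i', video.original.path, '-vf', 'scale=-2:480', output_path])

# views.py (sketch)
@login_required
def upload_video(request):
    if request.method == 'POST':
        form = VideoForm(request.POST, request.FILES)
        if form.is_valid():
            video = form.save(commit=False)    # assumes VideoForm is a ModelForm
            video.user = request.user
            video.save()                       # the uploaded file is on disk once saved
            transcode_video.delay(video.pk)    # queue the work; do not block the request
            return HttpResponseRedirect('/')
    else:
        form = VideoForm()
    return render(request, 'upload_video.html', {'form': form})

The upload still happens in the request as before; only the ffmpeg call is pushed to the worker, so the HTTP response returns immediately while the transcode runs in the background.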
-
Revision 1fc70e47cd: Adds code for corner detection and ransac
7 February 2015, by Deb Mukherjee
Changed paths:
Add /vp9/encoder/vp9_corner_detect.c
Add /vp9/encoder/vp9_corner_detect.h
Add /vp9/encoder/vp9_corner_match.c
Add /vp9/encoder/vp9_corner_match.h
Add /vp9/encoder/vp9_global_motion.c
Add /vp9/encoder/vp9_global_motion.h
Add /vp9/encoder/vp9_ransac.c
Add /vp9/encoder/vp9_ransac.h
Modify /vp9/vp9cx.mk
Adds code for corner detection and ransac
This code is to start experiments with global motion models.
The corner detection can be either fast_9 or Harris.
Corner matching is currently based on normalized correlation.
Three flavors of ransac are used to estimate either a
homography (8-param), or an affine model (6-param) or a
rotation-zoom only affine model (4-param).
The highest level API for the library is in vp9_global_motion.h,
where there are two functions - one for computing a single model
and another for computing multiple models up to a maximum number
provided or until a desired inlier probability is achieved.
Change-Id: I3f9788ec2dc0635cbc65f5c66c6ea8853cfcf2dd
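As a rough illustration of the RANSAC procedure described above (written in Python for readability; the actual libvpx code is C, and the helper names, iteration count and threshold here are invented for the example), the sketch below fits the 4-parameter rotation-zoom model x' = a*x - b*y + tx, y' = b*x + a*y + ty to point correspondences:

import numpy as np

def fit_rotzoom(src, dst):
    # Least-squares solve for (a, b, tx, ty); each correspondence contributes two rows.
    rows, rhs = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        rows.append([x, -y, 1.0, 0.0]); rhs.append(xp)
        rows.append([y,  x, 0.0, 1.0]); rhs.append(yp)
    params, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return params

def project(params, pts):
    a, b, tx, ty = params
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([a * x - b * y + tx, b * x + a * y + ty], axis=1)

def ransac_rotzoom(src, dst, iters=200, thresh=2.0, seed=0):
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    rng = np.random.default_rng(seed)
    best_params, best_inliers = None, 0
    for _ in range(iters):
        sample = rng.choice(len(src), size=2, replace=False)  # 2 pairs determine 4 params
        params = fit_rotzoom(src[sample], dst[sample])
        err = np.linalg.norm(project(params, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers:
            # Refit on all inliers of the best sample seen so far.
            best_params, best_inliers = fit_rotzoom(src[inliers], dst[inliers]), int(inliers.sum())
    return best_params, best_inliers

The 6-param affine and 8-param homography flavours follow the same pattern, with minimal samples of 3 and 4 correspondences respectively.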
-
Streaming client over TCP and RTSP through Wi-Fi or LAN in Android
6 January 2015, by Gowtham
I am struggling to develop a streaming client for DVR cameras. I tried VLC Media Player over the RTSP protocol and got it working (with standard Wi-Fi models such as Netgear), but the same code does not work with other Wi-Fi modems. Now I am working with the FFmpeg framework to implement the streaming client on Android using the JNI API, but I have no clear idea how to implement the JNI side.
The network camera works with the IP Cam Viewer app.
My code is below:
/*****************************************************/
/* functional call */
/*****************************************************/
jboolean Java_FFmpeg_allocateBuffer( JNIEnv* env, jobject thiz )
{
// Allocate an AVFrame structure
pFrameRGB=avcodec_alloc_frame();
if(pFrameRGB==NULL)
return 0;
sprintf(debugMsg, "%d %d", screenWidth, screenHeight);
INFO(debugMsg);
// Determine required buffer size and allocate buffer
numBytes=avpicture_get_size(dstFmt, screenWidth, screenHeight);
/*
numBytes=avpicture_get_size(dstFmt, pCodecCtx->width,
pCodecCtx->height);
*/
buffer=(uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
// Assign appropriate parts of buffer to image planes in pFrameRGB
// Note that pFrameRGB is an AVFrame, but AVFrame is a superset
// of AVPicture
avpicture_fill((AVPicture *)pFrameRGB, buffer, dstFmt, screenWidth, screenHeight);
return 1;
}
/* for each decoded frame */
jbyteArray Java_FFmpeg_getNextDecodedFrame( JNIEnv* env, jobject thiz )
{
av_free_packet(&packet);
while(av_read_frame(pFormatCtx, &packet)>=0) {
if(packet.stream_index==videoStream) {
avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
if(frameFinished) {
img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt, screenWidth, screenHeight, dstFmt, SWS_BICUBIC, NULL, NULL, NULL);
/*
img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height, dstFmt, SWS_BICUBIC, NULL, NULL, NULL);
*/
sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize,
0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
++frameCount;
/* uint8_t == unsigned 8 bits == jboolean */
jbyteArray nativePixels = (*env)->NewByteArray(env, numBytes);
(*env)->SetByteArrayRegion(env, nativePixels, 0, numBytes, buffer);
return nativePixels;
}
}
av_free_packet(&packet);
}
return NULL;
}
/*****************************************************/
/* / functional call */
/*****************************************************/
jstring
Java_FFmpeg_play( JNIEnv* env, jobject thiz, jstring jfilePath )
{
INFO("--- Play");
char* filePath = (char *)(*env)->GetStringUTFChars(env, jfilePath, NULL);
RE(filePath);
/*****************************************************/
AVFormatContext *pFormatCtx;
int i, videoStream;
AVCodecContext *pCodecCtx;
AVCodec *pCodec;
AVFrame *pFrame;
AVPacket packet;
int frameFinished;
float aspect_ratio;
struct SwsContext *img_convert_ctx;
INFO(filePath);
/* FFmpeg */
av_register_all();
if(av_open_input_file(&pFormatCtx, filePath, NULL, 0, NULL)!=0)
RE("failed av_open_input_file ");
if(av_find_stream_info(pFormatCtx)<0)
RE("failed av_find_stream_info");
videoStream=-1;
for(i=0; i<pFormatCtx->nb_streams; i++)
if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
videoStream=i;
break;
}
if(videoStream==-1)
RE("failed videostream == -1");
pCodecCtx=pFormatCtx->streams[videoStream]->codec;
pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
if(pCodec==NULL) {
RE("Unsupported codec!");
}
if(avcodec_open(pCodecCtx, pCodec)<0)
RE("failed codec_open");
pFrame=avcodec_alloc_frame();
/* /FFmpeg */
INFO("codec name:");
INFO(pCodec->name);
INFO("Getting into stream decode:");
/* video stream */
i=0;
while(av_read_frame(pFormatCtx, &packet)>=0) {
if(packet.stream_index==videoStream) {
avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
if(frameFinished) {
++i;
INFO("frame finished");
AVPicture pict;
/*
img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);
sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize,
0, pCodecCtx->height, pict.data, pict.linesize);
*/
}
}
av_free_packet(&packet);
}
/* /video stream */
av_free(pFrame);
avcodec_close(pCodecCtx);
av_close_input_file(pFormatCtx);
RE("end of main");
}
I can't get the frames from the network camera.
Could you give me some idea of how to implement a live-stream client for a DVR camera on Android?
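A quick sanity check, separate from the JNI code above and assuming only that the ffmpeg command-line tool is available: before debugging the native decode path, it can help to confirm that the camera's RTSP stream is actually reachable over TCP. The camera URL below is a placeholder.

import subprocess

# Grab a single frame from the camera, forcing RTP over TCP since some
# routers/modems drop the default UDP transport.
subprocess.call([
    'ffmpeg',
    '-rtsp_transport', 'tcp',                  # interleave RTP over the RTSP TCP connection
    '-i', 'rtsp://192.168.1.10:554/stream1',   # placeholder camera address
    '-frames:v', '1',                          # stop after one decoded video frame
    '-y', 'snapshot.jpg',
])

If this produces a snapshot, the transport and codec side is fine and the problem is in the JNI integration; if it does not, the camera or network configuration is the first thing to look at.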