
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
Other articles (88)
-
Submitting improvements and additional plugins
10 April 2011
If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the official distribution will be considered.
You can use the development mailing list to announce it or to ask for help with writing the plugin. Since MediaSPIP is based on SPIP, it is also possible to use SPIP's SPIP-zone mailing list to (...) -
Organising by category
17 May 2013, by
In MediaSPIP, a section has two names: category and rubrique.
The documents stored in MediaSPIP can be filed in different categories. A category can be created by clicking on "publish a category" in the "publish" menu at the top right (after logging in). A category can itself be filed inside another category, so a tree of categories can be built.
The next time a document is published, the newly created category will be offered (...) -
Retrieving information from the master site when installing an instance
26 November 2010, by
Purpose
On the main site, a shared (mutualised) instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, corresponding to an id_auteur in the spip_auteurs table), who will be the only person able to definitively create the mutualised instance;
It can therefore be quite sensible to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)
On other sites (10460)
-
avformat/iamf: fix setting channel layout for Scalable layers
17 June, by James Almer
avformat/iamf: fix setting channel layout for Scalable layers
The way streams are coded in an IAMF struct follows a scalable model where the channel layouts for each layer may not match the channel order our API can represent in a Native order layout.
For example, an audio element may have six coded streams in the form of two stereo streams, followed by two mono streams, and then by another two stereo streams, for a total of 10 channels, and define for them four scalable layers with loudspeaker_layout values "Stereo", "5.1ch", "5.1.2ch", and "5.1.4ch". The first layer references the first stream, and each following layer will reference all previous streams plus extra ones.
In this case, the "5.1ch" layer will reference four streams (the first two stereo and the two mono) to encompass six channels, which does not match our native layout 5.1(side), given that FC and LFE come after FL+FR but before SL+SR, whereas here they are at the end.
For this reason, we need to build Custom order layouts that properly represent what we're exporting.
Before:
  Stream group #0:0[0x12c]: IAMF Audio Element:
    Layer 0: stereo
      Stream #0:0[0x0]: Audio: opus, 48000 Hz, stereo, fltp (default)
    Layer 1: 5.1(side)
      Stream #0:0[0x0]: Audio: opus, 48000 Hz, stereo, fltp (default)
      Stream #0:1[0x1]: Audio: opus, 48000 Hz, stereo, fltp (dependent)
      Stream #0:2[0x2]: Audio: opus, 48000 Hz, mono, fltp (dependent)
      Stream #0:3[0x3]: Audio: opus, 48000 Hz, mono, fltp (dependent)
    Layer 2: 5.1.2
      Stream #0:0[0x0]: Audio: opus, 48000 Hz, stereo, fltp (default)
      Stream #0:1[0x1]: Audio: opus, 48000 Hz, stereo, fltp (dependent)
      Stream #0:2[0x2]: Audio: opus, 48000 Hz, mono, fltp (dependent)
      Stream #0:3[0x3]: Audio: opus, 48000 Hz, mono, fltp (dependent)
      Stream #0:4[0x4]: Audio: opus, 48000 Hz, stereo, fltp (dependent)
    Layer 3: 5.1.4
      Stream #0:0[0x0]: Audio: opus, 48000 Hz, stereo, fltp (default)
      Stream #0:1[0x1]: Audio: opus, 48000 Hz, stereo, fltp (dependent)
      Stream #0:2[0x2]: Audio: opus, 48000 Hz, mono, fltp (dependent)
      Stream #0:3[0x3]: Audio: opus, 48000 Hz, mono, fltp (dependent)
      Stream #0:4[0x4]: Audio: opus, 48000 Hz, stereo, fltp (dependent)
      Stream #0:5[0x5]: Audio: opus, 48000 Hz, stereo, fltp (dependent)
After:
  Stream group #0:0[0x12c]: IAMF Audio Element:
    Layer 0: stereo
      Stream #0:0[0x0]: Audio: opus, 48000 Hz, stereo, fltp (default)
    Layer 1: 6 channels (FL+FR+SL+SR+FC+LFE)
      Stream #0:0[0x0]: Audio: opus, 48000 Hz, stereo, fltp (default)
      Stream #0:1[0x1]: Audio: opus, 48000 Hz, stereo, fltp (dependent)
      Stream #0:2[0x2]: Audio: opus, 48000 Hz, mono, fltp (dependent)
      Stream #0:3[0x3]: Audio: opus, 48000 Hz, mono, fltp (dependent)
    Layer 2: 8 channels (FL+FR+SL+SR+FC+LFE+TFL+TFR)
      Stream #0:0[0x0]: Audio: opus, 48000 Hz, stereo, fltp (default)
      Stream #0:1[0x1]: Audio: opus, 48000 Hz, stereo, fltp (dependent)
      Stream #0:2[0x2]: Audio: opus, 48000 Hz, mono, fltp (dependent)
      Stream #0:3[0x3]: Audio: opus, 48000 Hz, mono, fltp (dependent)
      Stream #0:4[0x4]: Audio: opus, 48000 Hz, stereo, fltp (dependent)
    Layer 3: 10 channels (FL+FR+SL+SR+FC+LFE+TFL+TFR+TBL+TBR)
      Stream #0:0[0x0]: Audio: opus, 48000 Hz, stereo, fltp (default)
      Stream #0:1[0x1]: Audio: opus, 48000 Hz, stereo, fltp (dependent)
      Stream #0:2[0x2]: Audio: opus, 48000 Hz, mono, fltp (dependent)
      Stream #0:3[0x3]: Audio: opus, 48000 Hz, mono, fltp (dependent)
      Stream #0:4[0x4]: Audio: opus, 48000 Hz, stereo, fltp (dependent)
      Stream #0:5[0x5]: Audio: opus, 48000 Hz, stereo, fltp (dependent)
Signed-off-by: James Almer <jamrial@gmail.com>
- [DH] libavformat/iamf_parse.c
- [DH] libavformat/iamf_writer.c
- [DH] libavformat/iamfdec.c
- [DH] tests/ref/fate/iamf-5_1-copy
- [DH] tests/ref/fate/iamf-5_1-demux
- [DH] tests/ref/fate/iamf-5_1_4
- [DH] tests/ref/fate/iamf-7_1_4
- [DH] tests/ref/fate/iamf-9_1_6
- [DH] tests/ref/fate/mov-mp4-iamf-5_1_4
- [DH] tests/ref/fate/mov-mp4-iamf-7_1_4-video-first
- [DH] tests/ref/fate/mov-mp4-iamf-7_1_4-video-last
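For context, here is a minimal sketch (assumed, not taken from the commit itself) of how a Custom order layout such as the 6-channel "FL+FR+SL+SR+FC+LFE" layer above can be built with libavutil's public channel-layout API; the helper name is illustrative:

#include <libavutil/channel_layout.h>

/* Illustrative sketch only: build the custom-order 6-channel layout
 * FL+FR+SL+SR+FC+LFE shown in the "After" output above. */
static int build_custom_5_1_layer(AVChannelLayout *layout)
{
    static const enum AVChannel order[6] = {
        AV_CHAN_FRONT_LEFT,   AV_CHAN_FRONT_RIGHT,
        AV_CHAN_SIDE_LEFT,    AV_CHAN_SIDE_RIGHT,
        AV_CHAN_FRONT_CENTER, AV_CHAN_LOW_FREQUENCY,
    };
    int ret = av_channel_layout_custom_init(layout, 6);  /* AV_CHANNEL_ORDER_CUSTOM */
    if (ret < 0)
        return ret;
    for (int i = 0; i < layout->nb_channels; i++)
        layout->u.map[i].id = order[i];  /* keep the IAMF coding order, not the native one */
    return 0;
}

Passing such a layout to av_channel_layout_describe() should produce the "6 channels (FL+FR+SL+SR+FC+LFE)" form seen in the log above.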
-
Streaming client over TCP and RTSP through Wi-Fi or LAN in Android
6 January 2015, by Gowtham
I am struggling to develop a streaming client for DVR cameras. Using VLC media player over the RTSP protocol I got it working (with standard Wi-Fi routers such as Netgear), but the same code does not work with other Wi-Fi modems. I am now working with the FFmpeg framework to implement the streaming client on Android through the JNI API, but I do not have a clear idea of how to implement the JNI side.
The network camera works fine with the IP Cam Viewer app.
My code is below:
/*****************************************************/
/* functional call                                   */
/*****************************************************/
jboolean Java_FFmpeg_allocateBuffer(JNIEnv* env, jobject thiz)
{
    // Allocate an AVFrame structure
    pFrameRGB = avcodec_alloc_frame();
    if (pFrameRGB == NULL)
        return 0;

    sprintf(debugMsg, "%d %d", screenWidth, screenHeight);
    INFO(debugMsg);

    // Determine required buffer size and allocate buffer
    numBytes = avpicture_get_size(dstFmt, screenWidth, screenHeight);
    /*
    numBytes = avpicture_get_size(dstFmt, pCodecCtx->width, pCodecCtx->height);
    */
    buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));

    // Assign appropriate parts of buffer to image planes in pFrameRGB
    // Note that pFrameRGB is an AVFrame, but AVFrame is a superset of AVPicture
    avpicture_fill((AVPicture *)pFrameRGB, buffer, dstFmt, screenWidth, screenHeight);
    return 1;
}
/* for each decoded frame */
jbyteArray Java_FFmpeg_getNextDecodedFrame(JNIEnv* env, jobject thiz)
{
    // Free the packet that was returned to Java on the previous call
    av_free_packet(&packet);

    while (av_read_frame(pFormatCtx, &packet) >= 0) {
        if (packet.stream_index == videoStream) {
            avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
            if (frameFinished) {
                img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
                                                 screenWidth, screenHeight, dstFmt,
                                                 SWS_BICUBIC, NULL, NULL, NULL);
                /*
                img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
                                                 pCodecCtx->width, pCodecCtx->height, dstFmt,
                                                 SWS_BICUBIC, NULL, NULL, NULL);
                */
                sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize,
                          0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
                ++frameCount;

                /* uint8_t == 8 bits, copied out as jbyte */
                jbyteArray nativePixels = (*env)->NewByteArray(env, numBytes);
                (*env)->SetByteArrayRegion(env, nativePixels, 0, numBytes, (const jbyte *)buffer);
                return nativePixels;
            }
        }
        av_free_packet(&packet);
    }
    return NULL;
}
/*****************************************************/
/* / functional call                                 */
/*****************************************************/
jstring Java_FFmpeg_play(JNIEnv* env, jobject thiz, jstring jfilePath)
{
    INFO("--- Play");
    char* filePath = (char *)(*env)->GetStringUTFChars(env, jfilePath, NULL);
    RE(filePath);

    /*****************************************************/
    AVFormatContext *pFormatCtx;
    int i, videoStream;
    AVCodecContext *pCodecCtx;
    AVCodec *pCodec;
    AVFrame *pFrame;
    AVPacket packet;
    int frameFinished;
    float aspect_ratio;
    struct SwsContext *img_convert_ctx;

    INFO(filePath);

    /* FFmpeg */
    av_register_all();

    if (av_open_input_file(&pFormatCtx, filePath, NULL, 0, NULL) != 0)
        RE("failed av_open_input_file");
    if (av_find_stream_info(pFormatCtx) < 0)
        RE("failed av_find_stream_info");

    // Find the first video stream
    videoStream = -1;
    for (i = 0; i < pFormatCtx->nb_streams; i++)
        if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            videoStream = i;
            break;
        }
    if (videoStream == -1)
        RE("failed videostream == -1");

    pCodecCtx = pFormatCtx->streams[videoStream]->codec;
    pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
    if (pCodec == NULL) {
        RE("Unsupported codec!");
    }
    if (avcodec_open(pCodecCtx, pCodec) < 0)
        RE("failed codec_open");

    pFrame = avcodec_alloc_frame();
    /* /FFmpeg */

    INFO("codec name:");
    INFO(pCodec->name);
    INFO("Getting into stream decode:");

    /* video stream */
    i = 0;
    while (av_read_frame(pFormatCtx, &packet) >= 0) {
        if (packet.stream_index == videoStream) {
            avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
            if (frameFinished) {
                ++i;
                INFO("frame finished");
                AVPicture pict;
                /*
                img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
                                                 pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height,
                                                 PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);
                sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize,
                          0, pCodecCtx->height, pict.data, pict.linesize);
                */
            }
        }
        av_free_packet(&packet);
    }
    /* /video stream */

    av_free(pFrame);
    avcodec_close(pCodecCtx);
    av_close_input_file(pFormatCtx);
    RE("end of main");
}

I am not able to get frames from the network camera.
Could someone give me an idea of how to implement a live streaming client for a DVR camera on Android?
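One point worth checking (a hedged suggestion, not from the original post): the RTSP demuxer defaults to RTP over UDP, which some consumer Wi-Fi routers handle poorly; forcing interleaved TCP transport when opening the input is a common workaround. A minimal sketch with libavformat follows, where the helper name is illustrative and the URL is supplied by the caller:

#include <libavformat/avformat.h>

/* Illustrative sketch: open an RTSP source with the media data interleaved
 * over the TCP control connection instead of separate UDP streams. */
static int open_rtsp_over_tcp(AVFormatContext **fmt_ctx, const char *url)
{
    AVDictionary *opts = NULL;
    int ret;

    av_dict_set(&opts, "rtsp_transport", "tcp", 0);  /* force TCP interleaving */

    ret = avformat_open_input(fmt_ctx, url, NULL, &opts);
    av_dict_free(&opts);
    if (ret < 0)
        return ret;

    return avformat_find_stream_info(*fmt_ctx, NULL);
}

Note that the per-demuxer option dictionary can only be passed through avformat_open_input(); the older av_open_input_file() used in the code above does not accept it.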
-
Catalyst Open Source Academy
16 January 2015, by Matthieu Aubry — Community
The Open Source Academy is an initiative designed to provide training and work experience for young New Zealand technologists. Catalyst organises the Academy to show young technologists how to participate in open source communities and to fully explore their passion for IT through freely available open source tools.
It has been running annually since 2011. We are proud that the Piwik project could participate in the Academy again this year!
What students got done
It’s amazing what a few young students can get done in four days of participating in an open source project like Piwik! They were able to quickly get started with Piwik, and continued to make useful contributions to the Piwik analytics platform.
New Darkness theme
Liam has created a new dark theme for Piwik called Darkness.
To create the theme, Liam had to improve Piwik core stylesheets and created this pull request: Reuse the LESS variable for white color across all stylesheets.
Accessibility improvements
We were lucky to spend time with Julius, a Catalyst employee who is blind. He showed us in great detail how difficult and time consuming it can be for a blind user to use Piwik. For example we noticed how complicated it was for Julius to navigate the menus, to get to the main content, and to use the calendar and the Website selector. During this presentation we also noticed that Piwik was not yet usable with the keyboard.
As a result of this session with Julius we got to work with the students to improve accessibility in Piwik.
List of accessibility improvements
All these pull requests were created by the students and have been successfully merged into Piwik:
- see the focus of menu items when tabbing through the page
- Tweaks to improve accessibility (section around graph, better tooltip, set language in html)
- Added h3 tags to widget headers
- when main Menu is focused, display submenu
- Add “Skip to content” feature when user presses tab key
- Improved accessibility by putting offscreen text in the widget title
To learn more about accessibility in Piwik check out this issue on our tracker.
Summary
Working with young students was fun and interesting. We were excited to see how much they got done in such a short time!
At Piwik and Piwik PRO we are committed to building the best open analytics platform, and we will continue to support students who want to take part in the Piwik adventure.
Be well,