
Other articles (48)
-
Contribute to translation
13 April 2011
You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, which allows it to spread to new linguistic communities.
To do this, we use the SPIP translation interface, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...)
-
The farm's regular Cron tasks
1 December 2010
Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of every instance in the shared farm on a regular basis. Combined with a system Cron on the central site of the farm, it generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)
-
(De)activating features (plugins)
18 February 2011
To manage the addition and removal of extra features (plugins), MediaSPIP uses SVP as of version 0.2.
SVP makes it easy to activate plugins from the MediaSPIP configuration area.
To access it, simply go to the configuration area and then to the "Gestion des plugins" page.
By default, MediaSPIP ships with all the plugins deemed "compatible"; they have been tested and integrated so as to work perfectly with each (...)
Sur d’autres sites (8701)
-
ffplay cannot play more than one song
5 February 2020, by Bernie gach
I have taken the ffplay.c file from http://ffmpeg.org/doxygen/trunk/ffplay_8c-source.html and re-edited it into a .cpp file to embed in my Win32 GUI application. I have made the following changes to it.
- Made the int main function into a local function as follows, so I can pass the HWND to embed the player:
void Ffplay::play_song(string file, HWND parent, bool* successfull)
{
    int flags;
    VideoState* is;

    input_filename = file;

    /* register all codecs, demux and protocols */
#if CONFIG_AVDEVICE
    avdevice_register_all();
#endif
    //avformat_network_init();

    //check whether the filename is valid
    if (input_filename.empty())
    {
        logger.log(logger.LEVEL_ERROR, "filename %s is not valid\n", file);
        return;
    }

    if (display_disable)
    {
        video_disable = 1;
    }

    flags = SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER;
    if (audio_disable)
        flags &= ~SDL_INIT_AUDIO;
    else
    {
        /* Try to work around an occasional ALSA buffer underflow issue when the
         * period size is NPOT due to ALSA resampling by forcing the buffer size. */
        if (!SDL_getenv("SDL_AUDIO_ALSA_SET_BUFFER_SIZE"))
            SDL_setenv("SDL_AUDIO_ALSA_SET_BUFFER_SIZE", "1", 1);
    }
    if (display_disable)
        flags &= ~SDL_INIT_VIDEO;

    SDL_SetMainReady();
    if (SDL_Init(flags))
    {
        logger.log(logger.LEVEL_ERROR, "Could not initialize SDL - %s\n", SDL_GetError());
        logger.log(logger.LEVEL_ERROR, "(Did you set the DISPLAY variable?)\n");
        return;
    }

    //Initialize optional fields of a packet with default values.
    //Note, this does not touch the data and size members, which have to be initialized separately.
    av_init_packet(&flush_pkt);
    flush_pkt.data = (uint8_t*)&flush_pkt;

    if (!display_disable)
    {
        int flags = SDL_WINDOW_HIDDEN;
        if (alwaysontop)
#if SDL_VERSION_ATLEAST(2,0,5)
            flags |= SDL_WINDOW_ALWAYS_ON_TOP;
#else
            logger.log(logger.LEVEL_INFO, "SDL version doesn't support SDL_WINDOW_ALWAYS_ON_TOP. Feature will be inactive.\n");
#endif
        if (borderless)
            flags |= SDL_WINDOW_BORDERLESS;
        else
            flags |= SDL_WINDOW_RESIZABLE;

        SDL_InitSubSystem(flags);
        ShowWindow(parent, true);
        //window = SDL_CreateWindow(program_name, SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, default_width, default_height, flags);
        window = SDL_CreateWindowFrom(parent);
        SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "linear");
        if (window) {
            renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
            if (!renderer)
            {
                logger.log(logger.LEVEL_ERROR, "Failed to initialize a hardware accelerated renderer: %s\n", SDL_GetError());
                renderer = SDL_CreateRenderer(window, -1, 0);
            }
            if (renderer)
            {
                if (!SDL_GetRendererInfo(renderer, &renderer_info))
                {
                    logger.log(logger.LEVEL_INFO, "Initialized %s renderer.\n", renderer_info.name);
                }
            }
        }
        if (!window || !renderer || !renderer_info.num_texture_formats)
        {
            logger.log(logger.LEVEL_ERROR, "Failed to create window or renderer: %s\n", SDL_GetError());
            return;
        }
    }

    is = stream_open(input_filename.c_str(), file_iformat);
    if (!is)
    {
        logger.log(logger.LEVEL_ERROR, "Failed to initialize VideoState!\n");
        return;
    }

    //the song is playing now
    *successfull = true;
    event_loop(is);
    //the song has quit
    *successfull = false;
}

- Changed the callback functions, since the static ones couldn't be used by C++, e.g.:
void Ffplay::static_sdl_audio_callback(void* opaque, Uint8* stream, int len)
{
    static_cast<Ffplay*>(opaque)->sdl_audio_callback(opaque, stream, len);
}

The closing code doesn't change from the main file; it closes the audio and the SDL framework:
void Ffplay::do_exit(VideoState* is)
{
    abort = true;
    if (is)
    {
        stream_close(is);
    }
    if (renderer)
        SDL_DestroyRenderer(renderer);
    if (window)
        SDL_DestroyWindow(window);
#if CONFIG_AVFILTER
    av_freep(&vfilters_list);
#endif
    avformat_network_deinit();
    SDL_Quit();
}

I call the functions as follows from the main GUI:
ft=std::async(launch::async, &Menu::play_song, this, songs_to_play.at(0));
The Menu::play_song function is:

void Menu::play_song(wstring song_path)
{
    ready_to_play_song = false;
    OutputDebugString(L"\nbefore song\n");
    using std::future;
    using std::async;
    using std::launch;
    string input{ song_path.begin(), song_path.end() };
    Ffplay ffplay;
    ffplay.play_song(input, h_sdl_window, &song_opened);
    OutputDebugString(L"\nafter song\n");
    ready_to_play_song = true;
}

THE PROBLEM is that I can only play one song. If I call Menu::play_song again, the sound is missing and the video/cover art is occasionally missing as well. It seems some resources are not being released, or something like that. I have localised the problem to this function:
int Ffplay::packet_queue_get(PacketQueue* q, AVPacket* pkt, int block, int* serial)
{
    MyAVPacketList* pkt1;
    int ret;
    int count = 0;

    SDL_LockMutex(q->mutex);
    for (;;)
    {
        if (q->abort_request)
        {
            ret = -1;
            break;
        }

        pkt1 = q->first_pkt;
        if (pkt1) {
            q->first_pkt = pkt1->next;
            if (!q->first_pkt)
                q->last_pkt = NULL;
            q->nb_packets--;
            q->size -= pkt1->pkt.size + sizeof(*pkt1);
            q->duration -= pkt1->pkt.duration;
            *pkt = pkt1->pkt;
            if (serial)
                *serial = pkt1->serial;
            av_free(pkt1);
            ret = 1;
            break;
        }
        else if (!block) {
            ret = 0;
            break;
        }
        else
        {
            logger.log(logger.LEVEL_INFO, "packet_queue before");
            SDL_CondWait(q->cond, q->mutex);
            logger.log(logger.LEVEL_INFO, "packet_queue after");
        }
    }
    SDL_UnlockMutex(q->mutex);
    return ret;
}

The call to SDL_CondWait(q->cond, q->mutex) never returns.
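For reference, SDL_CondWait() in that loop is only woken by two things: the read thread putting a new packet on the queue (packet_queue_put signals q->cond), or the queue being aborted. In stock ffplay.c of that era the relevant helpers look roughly like the sketch below (function names as upstream; the port may differ). If the second playback hangs there, it usually means the new VideoState's read thread never feeds the queues again, e.g. because some static or global state left over from the first run (abort flags, flush_pkt, the opened SDL audio device) was not reset before reuse.

/* Sketch of the wake-up paths as they appear in stock ffplay.c. */
static void packet_queue_abort(PacketQueue *q)
{
    SDL_LockMutex(q->mutex);
    q->abort_request = 1;      /* makes packet_queue_get return -1 ...           */
    SDL_CondSignal(q->cond);   /* ... and wakes any thread stuck in SDL_CondWait */
    SDL_UnlockMutex(q->mutex);
}

static void packet_queue_start(PacketQueue *q)
{
    SDL_LockMutex(q->mutex);
    q->abort_request = 0;                     /* must be cleared before the queue is reused */
    packet_queue_put_private(q, &flush_pkt);  /* seed the queue with the flush packet       */
    SDL_UnlockMutex(q->mutex);
}

-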
Start of video is not labeled as "0" in QuickTime Video from GoPro
26 March 2020, by John Terragnoli
I'm trying to combine four GoPro videos into a single video and then rotate it 90 degrees. However, the time scales at the bottom of the videos are all wrong. The videos are 17 minutes and 42 seconds long, but the start time is labeled as 5:15:20:32 and the end time as 5:33:01:32. It just looks really weird and I'd like to fix it. After I use ffmpeg to rotate and concatenate the videos, the problem persists. Could it possibly be fixed with ExifTool?
ffmpeg -safe 0 -f concat -i list.txt -vcodec copy -acodec copy merged_videos.MP4
ffmpeg -i input.mov -vf "transpose=1" output.mov
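As an aside, the odd start time shown by QuickTime comes from the timecode (tmcd) track that the GoPro writes (visible as "Other Format : tmcd" in the ExifTool dump below), and a stream copy carries it over unchanged. Two hedged sketches of how this could be addressed, using the mov/mp4 muxer's write_tmcd option and ffmpeg's -timecode option; the option names and behaviour should be verified against your ffmpeg version:

# Drop the timecode track entirely while still stream-copying:
ffmpeg -safe 0 -f concat -i list.txt -c copy -write_tmcd 0 merged_videos.MP4

# Or rewrite it so the merged file starts at 00:00:00:00:
ffmpeg -safe 0 -f concat -i list.txt -c copy -timecode 00:00:00:00 merged_videos.MP4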
Here is the ExifTool information for one of the videos:
File Name : GOPR3023.MP4
Directory : .
File Size : 3.7 GB
File Modification Date/Time : 2018:04:12 14:56:16-05:00
File Access Date/Time : 2020:03:25 12:17:18-05:00
File Inode Change Date/Time : 2020:03:25 17:57:04-05:00
File Permissions : rwxrwxrwx
File Type : MP4
File Type Extension : mp4
MIME Type : video/mp4
Major Brand : MP4 v1 [ISO 14496-1:ch13]
Minor Version : 2013.10.18
Compatible Brands : mp41
Movie Data Size : 4001979951
Movie Data Offset : 28
Movie Header Version : 0
Create Date : 2018:04:12 14:38:32
Modify Date : 2018:04:12 14:38:32
Time Scale : 60000
Duration : 0:17:42
Preferred Rate : 1
Preferred Volume : 100.00%
Preview Time : 0 s
Preview Duration : 0 s
Poster Time : 0 s
Selection Time : 0 s
Selection Duration : 0 s
Current Time : 0 s
Next Track ID : 6
Firmware Version : HD5.03.02.51.00
Lens Serial Number : NAH6092300301117
Camera Serial Number Hash : 34676f2cdf49b86a1514817a93377bf7
Track Header Version : 0
Track Create Date : 2018:04:12 14:38:32
Track Modify Date : 2018:04:12 14:38:32
Track ID : 1
Track Duration : 0:17:42
Track Layer : 0
Track Volume : 0.00%
Image Width : 1920
Image Height : 1080
Graphics Mode : srcCopy
Op Color : 0 0 0
Compressor ID : avc1
Source Image Width : 1920
Source Image Height : 1080
X Resolution : 72
Y Resolution : 72
Compressor Name : GoPro AVC encoder
Bit Depth : 24
Color Representation : nclx 1 1 1
Video Frame Rate : 59.94
Time Code : 3
Balance : 0
Audio Format : mp4a
Audio Channels : 2
Audio Bits Per Sample : 16
Audio Sample Rate : 48000
Text Font : Unknown (21)
Text Face : Plain
Text Size : 10
Text Color : 0 0 0
Background Color : 65535 65535 65535
Font Name : Helvetica
Other Format : tmcd
Warning : [minor] The ExtractEmbedded option may find more tags in the movie data
Matrix Structure : 1 0 0 0 1 0 0 0 1
Media Header Version : 0
Media Create Date : 2018:04:12 14:38:32
Media Modify Date : 2018:04:12 14:38:32
Media Time Scale : 60000
Media Duration : 0:17:42
Handler Class : Media Handler
Handler Type : NRT Metadata
Handler Description : GoPro SOS
Gen Media Version : 0
Gen Flags : 0 0 0
Gen Graphics Mode : srcCopy
Gen Op Color : 0 0 0
Gen Balance : 0
Meta Format : fdsc
Image Size : 1920x1080
Megapixels : 2.1
Avg Bitrate : 30.1 Mbps
Rotation : 0

Part 2
There is a pretty obvious "stutter" at the 17:42 mark where the two clips are combined. I've tried using ffmpeg and iMovie, but both give the same results. The GoPro broke up the event into multiple clips on its own, so it seems weird that there would be any information missing. Is there any way to get rid of this stutter? Thanks!
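Regarding Part 2, a stutter exactly at the point where two GoPro chapter files are joined with a stream copy is commonly caused by small gaps or overlaps in the audio at the chapter boundary. One hedged workaround (an assumption about the cause, not a confirmed fix) is to keep copying the video but re-encode the audio so the join is smoothed over:

# Stream-copy the video, re-encode the audio across the join:
ffmpeg -safe 0 -f concat -i list.txt -c:v copy -c:a aac -b:a 192k merged_videos.MP4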
-
how to make cloud services for camera device iot monitoring and control
16 November 2019, by guardian presence
I'm looking to use ffmpeg to pull an RTSP video stream over TCP, pass the video feeds to OpenCV for object recognition and tracking, and pass the output to Arduino serial control systems such as alarm systems, drones, and Arduino-controlled lawn mowers.
Let's say someone has a CCTV system and a drone lying around. If they connect to my cloud, the system can pull the stream from the CCTV cameras and from the drone; if an object moves into a camera zone, the cloud can direct the drone to fly to that zone and take a closer look at a face; if a face is detected, it sends an alert and returns the drone to its dock. If it is a PTZ camera, OpenCV handles tracking and zooming in on the face.
If ROS can be turned into a cloud service, I need to know about hosting and building a front end with user login and device login. I'm a CCTV installer, new to programming, and I want to set up a cloud login for camera IoT devices where users can control and monitor devices from a central cloud.
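For the first step of the pipeline described above (pulling an RTSP feed over TCP and handing frames to OpenCV), a minimal sketch is given below. It assumes OpenCV was built with its FFmpeg backend; the camera URL and credentials are placeholders, and the OPENCV_FFMPEG_CAPTURE_OPTIONS environment variable should be checked against the OpenCV version in use.

// Minimal sketch: read an RTSP stream over TCP with OpenCV's FFmpeg backend.
#include <opencv2/opencv.hpp>
#include <cstdlib>

int main()
{
    // Ask the FFmpeg backend to carry RTSP over TCP instead of UDP.
#ifdef _WIN32
    _putenv_s("OPENCV_FFMPEG_CAPTURE_OPTIONS", "rtsp_transport;tcp");
#else
    setenv("OPENCV_FFMPEG_CAPTURE_OPTIONS", "rtsp_transport;tcp", 1);
#endif

    cv::VideoCapture cap("rtsp://user:pass@192.168.1.10:554/stream1", cv::CAP_FFMPEG);
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    while (cap.read(frame))
    {
        // Hand the frame to the object-detection / tracking stage here, then
        // forward the result (e.g. over a serial port) to the Arduino side.
        cv::imshow("camera", frame);
        if (cv::waitKey(1) == 27)   // Esc quits
            break;
    }
    return 0;
}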