
Other articles (32)
-
Contribute to translation
13 April 2011
You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, which allows it to spread to new linguistic communities.
To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...) -
Publishing on MédiaSpip
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out. -
Use, discuss, criticize
13 April 2011
Talk to people directly involved in MediaSPIP’s development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP’s potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
On other sites (5245)
-
A nice comparison of mics
3 June 2010
DVEStore has done a great comparison of different types of microphones on video. Audio is a black art, and folks rarely put in the time to do A/B/C comparisons. We tend to just default to a set of mics that we’ve decided are "good enough" and then don’t go back to reevaluate.
-
Array Getting Lost While Passing From C++ to C Using Callback [duplicate]
23 December 2020, by Abhishek Sharma
I am trying to write video using FFmpeg, generating frames at runtime with Direct3D. The frames are produced with SharpDX on the C# side, and I use the Windows Runtime to call back into C# to generate a frame and return a Platform::Array of bytes.


So, for writing the video with FFmpeg, I use C code that writes the video, and to request frames I implemented a callback that generates a frame; all of this lives in a static lib.


uint8_t*(*genrate_frame_callback)(int) = NULL; 



Now, in the C file, I call fill_image to get the frame and write it to the video:


static void fill_image(int frame_index, int width, int height)
{
 int x, y, i;

 i = frame_index;
 // After this call, the debugger shows only a single element in result, not the whole array
 auto result = genrate_frame_callback(frame_index);

 /* ... code to write the frame to the video ... */
}



Now, before calling the video-writing code, I assign this function to the callback. It is defined in a C++ file inside a Windows Runtime Component that references the static lib:


uint8_t* genrate_frame(int args)
{
 auto frame = FireGenrateFrame(args); // returns Platform::Array<byte>
 std::vector<uint8_t> v(frame->begin(), frame->end());
 return v.data(); // the data is still available at this point
}


Now the result variable contains only a single element. I am new to C++ and C and cannot understand why the data is not passed to the function through the callback.
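A plausible cause, stated here as an assumption rather than a confirmed diagnosis: v is a local std::vector, so the pointer returned by v.data() refers to storage that is destroyed as soon as genrate_frame returns. Below is a minimal standalone sketch of that pattern, and of one possible fix, using the hypothetical names dangling_frame and heap_frame:

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Same pattern as genrate_frame: the returned pointer refers to a local
// vector's storage, which is freed when the function returns.
uint8_t* dangling_frame()
{
    std::vector<uint8_t> v{1, 2, 3, 4};
    return v.data(); // dangling as soon as v goes out of scope
}

// One possible fix: copy the bytes into a heap buffer that outlives the call.
// The caller (the C side) must free it with delete[] when it is done.
uint8_t* heap_frame(std::size_t* out_size)
{
    std::vector<uint8_t> v{1, 2, 3, 4};
    uint8_t* out = new uint8_t[v.size()];
    std::memcpy(out, v.data(), v.size());
    *out_size = v.size();
    return out;
}

int main()
{
    std::size_t n = 0;
    uint8_t* p = heap_frame(&n); // p stays valid until delete[]
    delete[] p;
}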


Edit :


Then can you help me with how to pass the data? I tried using a global-scope variable in the C++ file too, but it still gets lost. However, after introducing another callback to read the data stored in the global variable, it reads the whole data correctly:


std::vector<uint8_t> frame_v;

uint8_t* genrate_frame(int args)
{
 auto frame = FireGenrateFrame(args);
 std::vector<uint8_t> v(frame->begin(), frame->end());
 frame_v = v;
 return v.data(); // this is lost, the same as before
}

uint8_t read_pixal(int args)
{
 return frame_v[args]; // whereas this reads the data correctly
}



But I don't want to store the data and add a new callback to read from it; I just want to pass the array.
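One way to pass the array without a global and without a second callback would be to change the callback so the caller owns the buffer and the C++ side only fills it. This is just a sketch under that assumption; fill_frame_callback, fill_frame, write_frames and the frame size are hypothetical:

#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical callback shape: the caller supplies the destination buffer,
// the callee fills it and returns the number of bytes written.
typedef int (*fill_frame_callback)(int frame_index, uint8_t* dst, int capacity);

// C++ side (sketch): generate the frame and copy it into the caller's buffer.
int fill_frame(int frame_index, uint8_t* dst, int capacity)
{
    (void)frame_index; // a real implementation would generate this frame's pixels
    std::vector<uint8_t> v(640 * 480 * 4, 0); // stand-in for the real frame bytes
    int n = static_cast<int>(v.size());
    if (n > capacity)
        n = capacity;
    std::memcpy(dst, v.data(), n);
    return n; // the buffer belongs to the caller, so nothing dangles
}

// Caller side (sketch): allocate one buffer and reuse it for every frame.
void write_frames(fill_frame_callback cb, int frame_count)
{
    std::vector<uint8_t> buffer(640 * 480 * 4);
    for (int i = 0; i < frame_count; ++i) {
        int n = cb(i, buffer.data(), static_cast<int>(buffer.size()));
        (void)n; // ... hand buffer to the video-writing code here ...
    }
}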


-
ffmpeg concat drops audio frames
5 October 2017, by Shaun
I have an mp4 file and I want to take two sequential sections of the video out and render them as individual files, later recombining them back into the original video. For instance, with my video video.mp4, I can run
ffmpeg -i video.mp4 -ss 56 -t 4 out1.mp4
ffmpeg -i video.mp4 -ss 60 -t 4 out2.mp4
creating out1.mp4, which contains 00:00:56 to 00:01:00 of video.mp4, and out2.mp4, which contains 00:01:00 to 00:01:04. However, later I want to be able to recombine them again quickly (i.e., without re-encoding), so I use the concat demuxer:
ffmpeg -f concat -safe 0 -i files.txt -c copy concat.mp4
where files.txt contains
file out1.mp4
file out2.mp4
This theoretically should give me back 00:00:56 to 00:01:04 of video.mp4; however, there are always dropped audio frames where the concatenation occurs, creating a very unpleasant sound artifact, an audio blip, if you will.
I have tried using -async and -af apad when initially creating the two sections of the video, but I am still faced with the same problem, and have not found the solution elsewhere. I have experienced this issue in multiple different use cases, so hopefully this simple example will shed some light on the real problem.