
Other articles (100)
- MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
- MediaSPIP v0.2
21 June 2013. MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for an installation in "farm mode", you will also need to make other modifications (...)
- MediaSPIP version 0.1 Beta
16 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, all software dependencies must be installed manually on the server.
If you want to use this archive for an installation in "farm mode", you will also need to make other modifications (...)
On other sites (12031)
- Parsing avconv/ffmpeg rawvideo output?
23 April 2013, by DigitalMan. I'm about to begin a project that will involve working with the output of avconv/ffmpeg, pixel by pixel, in rgb32 format. I intend to work with a raw byte stream, such as from the pipe protocol. Basic pointer arithmetic (C/C++) will be used to iterate over these pixels and modify them in arbitrary ways in real time.
I've created a very small file using the rawvideo format and codec, and opened it up in a hex editor. As expected, it's just a series of pixels, read left to right, top to bottom. There is nothing distinguishing one line from the next - no problem, if you know how wide the video is beforehand. Nothing distinguishing one frame from the next - no problem, if you also know how tall the video is. There is no file header for the frame rate, or even for the pixel format (rgb32, rgb24, yuv, etc.) - again, as long as you already know it, the data can be worked with.
The problem occurs when - for one reason or another - some bytes are missing. Maybe the stream isn't being examined from the beginning, which is likely to be the case in my project, or maybe something just got lost. All the pre-existing knowledge in the world (besides maybe a byte count of what's been missed, which isn't going to happen) won't prevent the reader from happily chugging along with an incorrect line and frame offset.
So, what I'm looking for is an option for rawvideo, or possibly some other format/codec, that will allow me to work with the resulting stream at the pixel level, in RGB, yet still have a clear definition of where a new frame begins, even if it happens to start "looking" in the middle of a frame. (Width, height, and framerate will indeed be known.)
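A minimal sketch of the fixed-size framing this implies, not a rawvideo option: with raw video the only frame boundary is arithmetic (width * height * 4 bytes for rgb32), so the reader below simply consumes frame-sized chunks from a pipe such as ffmpeg -i input -f rawvideo -pix_fmt rgb32 -. The WIDTH and HEIGHT constants, the invert-the-pixel edit, and the assumed R,G,B,A channel order are all illustrative; the actual rgb32 byte order depends on endianness.

#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const int WIDTH = 640;   // must match the size requested from ffmpeg/avconv
    const int HEIGHT = 480;
    const size_t FRAME_BYTES = static_cast<size_t>(WIDTH) * HEIGHT * 4; // 4 bytes per rgb32 pixel

    std::vector<uint8_t> frame(FRAME_BYTES);
    // One full fread() == one frame; a short read means end of stream
    // (or lost bytes, which raw video gives no way to detect or resync from).
    while (std::fread(frame.data(), 1, FRAME_BYTES, stdin) == FRAME_BYTES) {
        for (int y = 0; y < HEIGHT; ++y) {
            uint8_t* row = frame.data() + static_cast<size_t>(y) * WIDTH * 4;
            for (int x = 0; x < WIDTH; ++x) {
                uint8_t* px = row + x * 4;  // one packed pixel
                px[0] = 255 - px[0];        // example in-place edit of three channels
                px[1] = 255 - px[1];
                px[2] = 255 - px[2];
                // px[3] left alone (alpha or padding, depending on the actual layout)
            }
        }
        std::fwrite(frame.data(), 1, FRAME_BYTES, stdout);
    }
    return 0;
}

If resynchronising after lost bytes matters more than keeping the stream header-free, a lightweight container that writes per-frame headers (ffmpeg's nut muxer, for example, via -f nut) is one option worth testing, at the cost of parsing those headers before reaching the pixel data.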
- Error encoding audio throughout the video in FFmpegFrameRecorder
28 March 2016, by Ragghwendra Suryawanshi. Hello Stack World,
I am trying to create a video from images on Android using FFmpeg and javacv. I am able to create a video from images without audio; when I try the same thing with audio, the video is created but the audio only lasts for 1 second of the video.
FFmpegFrameRecorder myFFmpegFrameRecorder = new FFmpegFrameRecorder(new StringBuilder(String.valueOf(strPath)).append("/").append(this.FileName).toString(), 640, 480, frameGrabber.getAudioChannels());
myFFmpegFrameRecorder.setVideoCodec(13);   // 13 == AV_CODEC_ID_MPEG4 in avcodec's enum
myFFmpegFrameRecorder.setFormat("mp4");
myFFmpegFrameRecorder.setPixelFormat(0);   // 0 == AV_PIX_FMT_YUV420P
myFFmpegFrameRecorder.setSampleFormat(frameGrabber.getSampleFormat());
myFFmpegFrameRecorder.setSampleRate(44100);
myFFmpegFrameRecorder.setFrameRate(1.0d);
myFFmpegFrameRecorder.setVideoBitrate(AccessibilityEventCompat.TYPE_TOUCH_INTERACTION_START); // Android constant reused here as an int bitrate value
myFFmpegFrameRecorder.setAudioCodec(avcodec.AV_CODEC_ID_MP3);
boolean isAudioFinish = false;
try {
    frameGrabber.start();
    IplImage iplimage = new IplImage();
    myFFmpegFrameRecorder.start();
    for (int i = 0; i <= imgname - 1; i++) {
        // record up to 7 frames from frameGrabber for each image
        for (int j = 0; j <= 6; j++) {
            Frame frame = frameGrabber.grabFrame();
            if (frame != null) {
                myFFmpegFrameRecorder.record(frame);
            }
            // 'l' is presumably a start time captured earlier (not shown in the snippet)
            long l1 = 1000L * (System.currentTimeMillis() - l);
            if (l1 < myFFmpegFrameRecorder.getTimestamp()) {
                l1 = 1000L + myFFmpegFrameRecorder.getTimestamp();
            }
            myFFmpegFrameRecorder.setTimestamp(l1);
        }
        // then record the still image as one video frame
        iplimage = opencv_highgui.cvLoadImage(myObjects.get(i).toString());
        myFFmpegFrameRecorder.record(iplimage);
        opencv_core.cvReleaseImage(iplimage);
    }
    myFFmpegFrameRecorder.stop();
    frameGrabber.stop();
} catch (Exception e) {
    e.printStackTrace();
}
Please help me to solve it. I am missing something because of which it is not working. I have read the documentation of FFmpegFrameRecorder but am unable to find my error.
- How can I get the start time of an rtsp-session via ffmpeg (C++)? start_time_realtime always equals -9223372036854775808
5 August 2019, by chuchuchu. I'm trying to get a frame over RTSP and calculate its real-world timestamp. I previously used Live555 for this (presentationTime).
As far as I understand, ffmpeg does not provide such functionality directly, but it does provide the ability to read the relative time of each frame and the start time of the stream. In my case, the frame timestamps (pts) work correctly, but the stream start time (start_time_realtime) is always -9223372036854775808.
I'm trying to use the simple example from this Q: https://stackoverflow.com/a/11054652/5355846
The value does not change, regardless of the position in the code:
int main(int argc, char** argv) {
    // Open the initial context variables that are needed
    SwsContext *img_convert_ctx;
    AVFormatContext* format_ctx = avformat_alloc_context();
    AVCodecContext* codec_ctx = NULL;
    int video_stream_index;

    // Register everything
    av_register_all();
    avformat_network_init();

    // open RTSP
    if (avformat_open_input(&format_ctx, "path_to_rtsp_stream",
                            NULL, NULL) != 0) {
        return EXIT_FAILURE;
    }
    ...
}

while (av_read_frame(format_ctx, &packet) >= 0 && cnt < 1000) { // read ~ 1000 frames
    //// here!
    std::cout << " ***** "
              << std::to_string(format_ctx->start_time_realtime)
              << " | " << format_ctx->streams[video_stream_index]->start_time
              << " | " << frame->best_effort_timestamp;
    ...
}

***** -9223372036854775808 | 0 | 4120 | 40801 Frame : 103
What am I doing wrong?
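A minimal fallback sketch rather than a confirmed fix: as far as I know, start_time_realtime is only populated once the demuxer learns an absolute clock (for RTSP, typically from an RTCP sender report), and -9223372036854775808 is simply AV_NOPTS_VALUE, meaning that has not happened yet. Under that assumption, one workaround is to capture the wall clock right after avformat_open_input() and add each packet's pts rescaled through the stream time base; the names packet_wallclock_us and open_wallclock_us are illustrative.

extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>
#include <libavutil/time.h>
}
#include <cstdint>

// Approximate the wall-clock time of a packet, in microseconds since the epoch.
int64_t packet_wallclock_us(const AVFormatContext* format_ctx,
                            int video_stream_index,
                            const AVPacket* packet,
                            int64_t open_wallclock_us) {
    const AVStream* st = format_ctx->streams[video_stream_index];
    int64_t pts = (packet->pts != AV_NOPTS_VALUE) ? packet->pts : packet->dts;
    int64_t base = (st->start_time != AV_NOPTS_VALUE) ? st->start_time : 0;
    // Offset from the stream's first timestamp, rescaled to microseconds.
    int64_t rel_us = av_rescale_q(pts - base, st->time_base, AVRational{1, 1000000});

    // Prefer the demuxer's absolute clock once it becomes available.
    if (format_ctx->start_time_realtime != AV_NOPTS_VALUE) {
        return format_ctx->start_time_realtime + rel_us;
    }
    // Otherwise fall back to the wall clock captured when the input was opened.
    return open_wallclock_us + rel_us;
}

// Usage sketch, inside the question's read loop:
//   int64_t open_wallclock_us = av_gettime();  // capture right after avformat_open_input()
//   ...
//   int64_t t_us = packet_wallclock_us(format_ctx, video_stream_index, &packet, open_wallclock_us);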