
Media (1)
-
The Slip - Artworks
26 September 2011
Updated: September 2011
Language: English
Type: Text
Other articles (69)
-
Updating from version 0.1 to 0.2
24 June 2013. An explanation of the notable changes made in moving from MediaSPIP version 0.1 to version 0.3. What's new?
On the software-dependency side: use of the latest versions of FFmpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)
-
Customizing by adding your logo, banner or background image
5 September 2013. Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013. Present the changes in your MediaSPIP, or news about your projects, using the news section.
In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
You can customize the form used to create a news item.
News item creation form: for a document of the news type, the fields offered by default are: publication date (customize the publication date) (...)
On other sites (8048)
-
Repeating a process twice with different arguments each time
29 October 2019, by NCrusher. I'm still quite new to coding and Swift; forgive my doubtless clumsy code.
I am trying to run a process with ffprobe from the main resource bundle. Actually, I need to run it twice, each time with different arguments: once to get the duration of the audio file I'm inputting, and once to get the full output so that I can parse it for a tidbit of data that I can't isolate with a single argument the way I can the file duration. (I have to do it twice because the full output doesn't actually list the duration in seconds the way I need.)
To avoid a lot of repetitive code, I'd like to do this in one function. This is what I have:
func ffprobeOperations() {
    var probeArguments = [String]()
    // get full ffprobe output to parse
    let probeArguments1 = [
        "-hide_banner",
        "\(inputFilePath)"]
    // get just file duration in seconds
    let probeArguments2 = [
        "-hide_banner",
        "-v",
        "0",
        "-show_entries",
        "format=duration",
        "-of",
        "compact=p=0:nk=1",
        "\(inputFilePath)"]
    var probePass: Int = 0
    if probePass == 1 {
        probeArguments = probeArguments2
    } else if probePass == 0 {
        probeArguments = probeArguments1
    }
    guard let launchPath = Bundle.main.path(forResource: "ffprobe", ofType: "") else { return }
    do {
        let probeTask: Process = Process()
        probeTask.launchPath = launchPath
        probeTask.arguments = probeArguments
        probeTask.standardInput = FileHandle.nullDevice
        let pipe = Pipe()
        probeTask.standardError = pipe
        probeTask.standardOutput = pipe
        let outHandle = pipe.fileHandleForReading
        outHandle.waitForDataInBackgroundAndNotify()
        var obs1: NSObjectProtocol!
        obs1 = NotificationCenter.default.addObserver(forName: NSNotification.Name.NSFileHandleDataAvailable,
                                                      object: outHandle, queue: nil) { notification -> Void in
            let data = outHandle.availableData
            if data.count > 0 {
                if let str = NSString(data: data, encoding: String.Encoding.utf8.rawValue) {
                    self.ffmpegLogOutput.string += ("\(str)")
                    let range = NSRange(location: self.ffmpegLogOutput.string.count, length: 0)
                    self.ffmpegLogOutput.scrollRangeToVisible(range)
                }
                outHandle.waitForDataInBackgroundAndNotify()
            } else {
                print("EOF on stderr from process")
                NotificationCenter.default.removeObserver(obs1!)
            }
        }
        var obs2: NSObjectProtocol!
        obs2 = NotificationCenter.default.addObserver(forName: Process.didTerminateNotification,
                                                      object: probeTask, queue: nil) { notification -> Void in
            print("terminated")
            NotificationCenter.default.removeObserver(obs2!)
        }
        probeTask.launch()
        probeTask.waitUntilExit()
        probePass += 1
    }
}
But no matter where I position probePass += 1 in the function, Xcode still warns that the condition probePass == 1 will never be true, and thus the pass with the second set of arguments will never be executed. Where should I place probePass += 1, or is there a better way to do this altogether?
-
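The warning is accurate: probePass is a local variable set to 0 at the top of every call and tested before it is ever incremented, so the probePass == 1 branch is dead code no matter where the increment goes. The usual fix is to drop the counter and loop over the argument sets, launching one process per set. That control flow is language-independent; here is a minimal sketch in C++ (the language of the other snippets on this page), where buildPasses is a hypothetical helper name:

```cpp
#include <string>
#include <vector>

// Hypothetical helper: returns the two ffprobe argument sets from the
// question, in the order they should run. inputFilePath stands in for
// the asker's property of the same name.
std::vector<std::vector<std::string>> buildPasses(const std::string& inputFilePath) {
    return {
        // pass 1: full ffprobe output, to be parsed afterwards
        {"-hide_banner", inputFilePath},
        // pass 2: just the file duration in seconds
        {"-hide_banner", "-v", "0", "-show_entries", "format=duration",
         "-of", "compact=p=0:nk=1", inputFilePath},
    };
}
```

The launching code then becomes a loop, for (const auto& args : buildPasses(path)) { /* configure and run one process with args */ }, so there is no counter to increment and no dead branch. The same shape works in Swift: make the function take its argument array as a parameter and call it once per set.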
How to resume an ffmpeg streamer after a network disconnection, however long, once the network recovers
30 September 2019, by DJI_lover. I can stream my video well using ffmpeg's libav* when the network is good, but I cannot guarantee the network will always be good. I would like the streaming process to resume and continue streaming video when the network recovers, even after a disconnection of some length. In my tests, while streaming I manually disconnected the network and the streamer stopped quickly; after 20 seconds I restored the network, and the process resumed streaming within a few seconds. But if I disconnect the network for five minutes or longer, the process stays blocked when the network is restored. Does someone know how to make the streamer resume instead of blocking, no matter how long the network was disconnected, provided it eventually recovers (and without restarting the program)? Could you give me a solution if you know one? Thanks.
I have googled for a solution but can't find anything useful, so I'm asking here for help.
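One mechanism worth knowing here: FFmpeg lets you install an interrupt callback (an AVIOInterruptCB assigned to format_ctx->interrupt_callback before opening) that it polls during blocking network I/O; returning nonzero makes the blocked call fail instead of hanging forever, so the caller regains control and can close and re-open the connection. The FFmpeg-independent part of that pattern, a deadline check, can be sketched standalone; the Deadline struct and the wiring shown in the comment are illustrative, not taken from the question:

```cpp
#include <cassert>
#include <chrono>

// Deadline-based interrupt check. FFmpeg repeatedly calls a function
// with this signature (via AVIOInterruptCB) while an av_* call is
// blocked on I/O; a nonzero return aborts the blocked call.
struct Deadline {
    std::chrono::steady_clock::time_point expiry;
};

int interruptCallback(void* opaque) {
    const auto* d = static_cast<const Deadline*>(opaque);
    return std::chrono::steady_clock::now() > d->expiry ? 1 : 0;
}

// Assumed wiring in the real streamer (field names match the
// documented AVFormatContext/AVIOInterruptCB API):
//   Deadline d{std::chrono::steady_clock::now() + std::chrono::seconds(5)};
//   format_ctx->interrupt_callback.callback = interruptCallback;
//   format_ctx->interrupt_callback.opaque = &d;
// d.expiry is then pushed forward after every successful write, so the
// callback only fires once the network has been dead for ~5 seconds,
// at which point the program can tear down and reconnect in a loop.
```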
static int encode_and_write_frame(AVCodecContext *codec_ctx, AVFormatContext *fmt_ctx, AVFrame *frame)
{
    AVPacket pkt = {0};
    av_init_packet(&pkt);

    int ret = avcodec_send_frame(codec_ctx, frame);
    if (ret < 0)
    {
        fprintf(stderr, "Error sending frame to codec context!\n");
        return ret;
    }

    ret = avcodec_receive_packet(codec_ctx, &pkt);
    //pkt.dts = pkt.pts;
    if (ret < 0)
    {
        fprintf(stderr, "Error receiving packet from codec context!\n");
        return ret;
    }

    av_interleaved_write_frame(fmt_ctx, &pkt);
    av_packet_unref(&pkt);
    return 0;
}

void Streamer::stream_frame(const cv::Mat &image)
{
    if (can_stream()) {
        const int stride[] = {static_cast<int>(image.step[0])};
        sws_scale(scaler.ctx, &image.data, stride, 0, image.rows, picture.frame->data, picture.frame->linesize);
        picture.frame->pts += av_rescale_q(1, out_codec_ctx->time_base, out_stream->time_base);
        //picture.frame->dts = picture.frame->pts;
        encode_and_write_frame(out_codec_ctx, format_ctx, picture.frame);
    }
}

while (ok) {
    process_frame(read_frame, proc_frame);
    if (!from_camera) {
        streamer.stream_frame(proc_frame);
    } else {
        streamer.stream_frame(proc_frame, frame_time.count() * streamer.inv_stream_timebase);
    }
    ok = video_capture.read(read_frame);
    time_prev = time_stop;
}
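Note also that the snippet discards the return value of av_interleaved_write_frame, which is where a dead socket first surfaces as an error (or, without an interrupt callback, as an indefinite block). A common recovery shape, independent of FFmpeg, is to wrap the write in a retry loop that tears down and re-opens the connection on failure; a sketch with hypothetical names throughout:

```cpp
#include <functional>

// Try an operation; on failure, run recover() and retry, up to
// maxAttempts. Returns true once the operation succeeds. In the
// streamer, op would wrap av_interleaved_write_frame (checking its
// return value) and recover would close and re-open the output
// format context before the next attempt.
bool writeWithRecovery(const std::function<bool()>& op,
                       const std::function<void()>& recover,
                       int maxAttempts) {
    for (int attempt = 0; attempt < maxAttempts; ++attempt) {
        if (op()) return true;  // frame written successfully
        recover();              // reconnect before retrying
    }
    return false;               // give up; caller drops the frame
}
```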
-
How can I get the start time of an RTSP session via ffmpeg (C++)? start_time_realtime always equals -9223372036854775808
5 August 2019, by chuchuchu. I'm trying to get a frame via RTSP and calculate its real-world timestamp. I previously used Live555 for this (presentationTime).
As far as I understand, ffmpeg does not provide such functionality directly, but it does report the relative time of each frame and the start time of the stream. In my case the frame timestamps (pts) work correctly, but the stream start time (start_time_realtime) is always -9223372036854775808.
I'm trying to use the simple example from this Q: https://stackoverflow.com/a/11054652/5355846
The value does not change, regardless of where I read it in the code.
int main(int argc, char** argv) {
    // Open the initial context variables that are needed
    SwsContext *img_convert_ctx;
    AVFormatContext* format_ctx = avformat_alloc_context();
    AVCodecContext* codec_ctx = NULL;
    int video_stream_index;

    // Register everything
    av_register_all();
    avformat_network_init();

    // open RTSP
    if (avformat_open_input(&format_ctx, "path_to_rtsp_stream",
                            NULL, NULL) != 0) {
        return EXIT_FAILURE;
    }
    ...
}

while (av_read_frame(format_ctx, &packet) >= 0 && cnt < 1000) { // read ~ 1000 frames
    //// here!
    std::cout << " ***** "
              << std::to_string(format_ctx->start_time_realtime)
              << " | " << format_ctx->start_time
              << " | " << frame->best_effort_timestamp;
    ...
}

***** -9223372036854775808 | 0 | 4120 | 40801 Frame: 103

What am I doing wrong?
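Probably nothing: -9223372036854775808 is INT64_MIN, which is exactly how libavutil defines its "unset" sentinel AV_NOPTS_VALUE. Over plain RTSP, FFmpeg has no wall-clock anchor for the stream until (and unless) it sees timing metadata such as an RTCP sender report, so start_time_realtime simply stays unset. The first step is therefore to test for the sentinel instead of treating the number as a timestamp; a minimal illustration (hasRealStartTime is a hypothetical helper, and the macro is reproduced from libavutil/avutil.h):

```cpp
#include <cstdint>

// libavutil/avutil.h defines the sentinel like this:
#define AV_NOPTS_VALUE ((int64_t)UINT64_C(0x8000000000000000))

// Returns true when start_time_realtime actually carries a wall-clock
// value (microseconds since the Unix epoch), false when it is unset.
bool hasRealStartTime(int64_t start_time_realtime) {
    return start_time_realtime != AV_NOPTS_VALUE;
}
```

When the field is unset, another time source is needed, e.g. recording your own wall-clock time when the stream opens and offsetting it by each frame's pts rescaled to seconds.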