
Other articles (54)
-
Permissions overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page
-
Use it, talk about it, criticize it
10 April 2011. The first thing to do is to talk about it, either directly with the people involved in its development, or with those around you, to convince new people to use it.
The larger the community, the faster the software will evolve...
A mailing list is available for discussion between users.
-
List of compatible distributions
26 April 2011. The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.
Distribution    Version name            Version number
Debian          Squeeze                 6.x.x
Debian          Wheezy                  7.x.x
Debian          Jessie                  8.x.x
Ubuntu          The Precise Pangolin    12.04 LTS
Ubuntu          The Trusty Tahr         14.04
If you want to help us improve this list, you can give us access to a machine running a distribution that is not mentioned above, or send us the fixes needed to add (...)
On other sites (9792)
-
Adding and reading timestamp to an image using ffmpeg
17 November 2013, by Andrew Simpson. I am not sure if I should be posting this type of question here, and I have not uploaded any code as I am talking about concepts.
I have a C# WinForms desktop application.
It produces a flow of JPEGs derived from a motion detection algorithm.
I use Graphics.DrawText to add a timestamp directly onto each image when it is created.
I then use ffmpeg to produce an Ogg video file.
When I play the video file back I obviously see the images with their timestamps.
What I would like to be able to do is read, in code, the timestamp on every image. I had thought of using some sort of OCR for this, but it seems overkill to me. I have also considered creating a separate text file that acts as the 'index' of the video file, but then I would have to manage the 'transaction' between two different files.
The text file could become corrupted.
I would like a way of encapsulating this information so that it can easily be read by my C# code.
Has anyone had any experience doing this? Any recommendations, please?
Many Thanks
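One container-level approach, sketched here as a suggestion rather than part of the original question: instead of running OCR on the burned-in text, the per-frame timestamps that ffmpeg writes into the container can be read back with ffprobe and combined with a recording start time stored once (for example in a metadata tag or the file name). Assuming a hypothetical output file capture.ogv:

ffprobe -v error -select_streams v:0 -show_frames -show_entries frame=pts_time -of csv capture.ogv

Each output line is one frame's presentation time in seconds, which the C# application can parse without touching the pixels.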
-
How to build and link FFMPEG for iOS?
30 June 2015, by Alexander Tkachenko. Hi all!
I know there are a lot of questions here about FFMPEG on iOS, but none of the answers fit my case :(
Something strange happens every time I try to link FFMPEG in my project, so please help me! My task is to write a video-chat application for iOS that uses the RTMP protocol for publishing and reading a video stream to/from a custom Flash Media Server.
I decided to use rtmplib, a free open-source library for streaming FLV video over RTMP, as it is the only appropriate library I found.
Many problems appeared when I began researching it, but later I understood how it should work.
Now I can read a live FLV video stream (from a URL) and send it back to the channel with the help of my application.
My trouble now is sending video FROM the camera.
The basic sequence of operations, as I understand it, should be the following:
-
Using AVFoundation, with the chain Device -> AVCaptureSession -> AVVideoDataOutput -> AVAssetWriter, I write the camera output to a file (if needed, I can describe this flow in more detail, but in the context of the question it is not important). This flow performs hardware-accelerated conversion of the live camera video into the H.264 codec, but the result is in the MOV container format. (This step is completed.)
-
I read this temporary file after each sample is written and obtain the stream of bytes of video data (H.264-encoded, in a QuickTime container). (This step is already completed.)
-
I need to convert the video data from the QuickTime container format to FLV, all in real time (packet by packet).
-
Once I have the video-data packets in the FLV container format, I will be able to send the packets over RTMP using rtmplib.
Now, the most complicated part for me is step 3.
I think I need to use the ffmpeg libraries (libavformat) for this conversion. I even found source code showing how to decode H.264 data packets from a MOV file (and looking into libavformat, I found that it is possible to extract these packets even from a byte stream, which is more appropriate for me). Once that works, I will need to pack the packets into FLV (using ffmpeg or manually, by adding FLV headers to the H.264 packets; that is not a problem and is easy, if I am correct).
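As a sanity check of the container conversion itself, the same remux can be done on the command line with a stream copy, so the H.264 data is not re-encoded. This is only a sketch with hypothetical file names; the application itself would do the equivalent packet by packet through libavformat:

ffmpeg -i capture.mov -c:v copy -an -f flv capture.flv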
FFMPEG has great documentation and is a very powerful library, and I don't think using it will be a problem. BUT the problem here is that I cannot get it working in an iOS project.
I have spent 3 days reading the documentation, Stack Overflow and Google for an answer to the question "How to build FFMPEG for iOS", and I think my PM is going to fire me if I spend one more week trying to compile this library :))
I tried many different build scripts and configure files, but when I build FFMPEG I get libavformat, libavcodec, etc. for the x86 architecture (even when I specify the armv6 arch in the build script). (I use "lipo -info libavcodec.a" to show the architectures.)
So I could not build these sources, and decided to find a prebuilt FFMPEG built for the armv7, armv6 and i386 architectures.
I downloaded the iOS Comm Lib from MidnightCoders on GitHub; it contains an example of FFMPEG usage and prebuilt .a files for avcodec, avformat and the other FFMPEG libraries.
I checked their architecture:
iMac-2:MediaLibiOS root# lipo -info libavformat.a
Architectures in the fat file: libavformat.a are: armv6 armv7 i386
And I found that it is appropriate for me!
When I add these libraries and headers to the Xcode project, it compiles fine (I don't even get warnings like "Library was compiled for another architecture"), and I can use the structures from the headers. But when I try to call a C function from libavformat (av_register_all()), the compiler shows the error message "Symbol(s) not found for architecture armv7: av_register_all". I thought that maybe there were no symbols in the lib, and tried to list them:
root# nm -arch armv6 libavformat.a | grep av_register_all
00000000 T _av_register_all
Now I am stuck here; I don't understand why Xcode cannot see these symbols, and I cannot move forward.
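One detail worth checking, noted here as a suggestion rather than part of the original post: the nm command above only inspects the armv6 slice, while the link error is reported for armv7, so it is worth listing the slice that is actually being linked:

root# nm -arch armv7 libavformat.a | grep av_register_all

If the symbol is present there as well, another frequent cause of this exact error is including the FFmpeg C headers from a file compiled as Objective-C++ without wrapping the #include in extern "C", since the linker then looks for a C++-mangled name.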
Please correct me if I am wrong in my understanding of the flow for publishing an RTMP stream from iOS, and help me build and link FFMPEG for iOS.
I have the iOS 5.1 SDK and Xcode 4.2.
-
ffmpeg delay in decoding h264
19 May 2020, by Mateen Ulhaq. NOTE: Still looking for an answer!



I am taking raw RGB frames, encoding them to h264, then decoding them back to raw RGB frames.



[RGB frame] ------ encoder ------> [h264 stream] ------ decoder ------> [RGB frame]
     ^                                ^        ^                             ^
encoder_write                  encoder_read  decoder_write              decoder_read




I would like to retrieve the decoded frames as soon as possible. However, it seems that there is always a one-frame delay no matter how long one waits.¹ In this example, I feed the encoder a frame every 2 seconds:



$ python demo.py 2>/dev/null
time=0 frames=1 encoder_write
time=2 frames=2 encoder_write
time=2 frames=1 decoder_read <-- decoded output is delayed by extra frame
time=4 frames=3 encoder_write
time=4 frames=2 decoder_read
time=6 frames=4 encoder_write
time=6 frames=3 decoder_read
...




What I want instead:



$ python demo.py 2>/dev/null
time=0 frames=1 encoder_write
time=0 frames=1 decoder_read <-- decode immediately after encode
time=2 frames=2 encoder_write
time=2 frames=2 decoder_read
time=4 frames=3 encoder_write
time=4 frames=3 decoder_read
time=6 frames=4 encoder_write
time=6 frames=4 decoder_read
...




The encoder and decoder ffmpeg processes are run with the following arguments:



encoder: ffmpeg -f rawvideo -pix_fmt rgb24 -s 224x224 -i pipe: \
 -f h264 -tune zerolatency pipe:

decoder: ffmpeg -probesize 32 -flags low_delay \
 -f h264 -i pipe: \
 -f rawvideo -pix_fmt rgb24 -s 224x224 pipe:




Complete reproducible example below. No external video files needed. Just copy, paste, and run: python demo.py 2>/dev/null


import subprocess
from queue import Queue
from threading import Thread
from time import sleep, time
import numpy as np

WIDTH = 224
HEIGHT = 224
NUM_FRAMES = 256

def t(epoch=time()):
    return int(time() - epoch)

def make_frames(num_frames):
    x = np.arange(WIDTH, dtype=np.uint8)
    x = np.broadcast_to(x, (num_frames, HEIGHT, WIDTH))
    x = x[..., np.newaxis].repeat(3, axis=-1)
    x[..., 1] = x[:, :, ::-1, 1]
    scale = np.arange(1, len(x) + 1, dtype=np.uint8)
    scale = scale[:, np.newaxis, np.newaxis, np.newaxis]
    x *= scale
    return x

def encoder_write(writer):
    """Feeds encoder frames to encode"""
    frames = make_frames(num_frames=NUM_FRAMES)
    for i, frame in enumerate(frames):
        writer.write(frame.tobytes())
        writer.flush()
        print(f"time={t()} frames={i + 1:<3} encoder_write")
        sleep(2)
    writer.close()

def encoder_read(reader, queue):
    """Puts chunks of encoded bytes into queue"""
    while chunk := reader.read1():
        queue.put(chunk)
        # print(f"time={t()} chunk={len(chunk):<4} encoder_read")
    queue.put(None)

def decoder_write(writer, queue):
    """Feeds decoder bytes to decode"""
    while chunk := queue.get():
        writer.write(chunk)
        writer.flush()
        # print(f"time={t()} chunk={len(chunk):<4} decoder_write")
    writer.close()

def decoder_read(reader):
    """Retrieves decoded frames"""
    buffer = b""
    frame_len = HEIGHT * WIDTH * 3
    targets = make_frames(num_frames=NUM_FRAMES)
    i = 0
    while chunk := reader.read1():
        buffer += chunk
        while len(buffer) >= frame_len:
            frame = np.frombuffer(buffer[:frame_len], dtype=np.uint8)
            frame = frame.reshape(HEIGHT, WIDTH, 3)
            psnr = 10 * np.log10(255**2 / np.mean((frame - targets[i])**2))
            buffer = buffer[frame_len:]
            i += 1
            print(f"time={t()} frames={i:<3} decoder_read psnr={psnr:.1f}")

cmd = (
    "ffmpeg "
    "-f rawvideo -pix_fmt rgb24 -s 224x224 "
    "-i pipe: "
    "-f h264 "
    "-tune zerolatency "
    "pipe:"
)
encoder_process = subprocess.Popen(
    cmd.split(), stdin=subprocess.PIPE, stdout=subprocess.PIPE
)

cmd = (
    "ffmpeg "
    "-probesize 32 "
    "-flags low_delay "
    "-f h264 "
    "-i pipe: "
    "-f rawvideo -pix_fmt rgb24 -s 224x224 "
    "pipe:"
)
decoder_process = subprocess.Popen(
    cmd.split(), stdin=subprocess.PIPE, stdout=subprocess.PIPE
)

queue = Queue()

threads = [
    Thread(target=encoder_write, args=(encoder_process.stdin,),),
    Thread(target=encoder_read, args=(encoder_process.stdout, queue),),
    Thread(target=decoder_write, args=(decoder_process.stdin, queue),),
    Thread(target=decoder_read, args=(decoder_process.stdout,),),
]

for thread in threads:
    thread.start()






¹ I did some testing and it seems the decoder is waiting for the next frame's NAL header 00 00 00 01 41 88 (in hex) before it decodes the current frame. One would hope that the prefix 00 00 00 01 would be enough, but it also waits for the next two bytes!
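A small sketch, not part of the original question, that makes this observation easy to reproduce: it splits a captured H.264 Annex B byte stream on start codes and prints each NAL unit type, so one can see which units the encoder emits per frame. The file name stream.h264 is hypothetical; it could be a dump of everything read in encoder_read.

import re

def nal_unit_types(stream: bytes):
    """Split an Annex B byte stream on 3- or 4-byte start codes and
    return each NAL unit type (low 5 bits of the first header byte)."""
    units = re.split(b"\x00\x00\x00\x01|\x00\x00\x01", stream)
    return [u[0] & 0x1F for u in units if u]

with open("stream.h264", "rb") as f:
    # e.g. [7, 8, 5, 1, 1, ...] -> SPS, PPS, IDR slice, non-IDR slices
    # (0x41 & 0x1F == 1, i.e. the header byte from the footnote is a non-IDR slice)
    print(nal_unit_types(f.read()))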