
Media (1)
-
Map of Schillerkiez
13 May 2011, by
Updated: September 2011
Language: English
Type: Text
Other articles (97)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...)
-
Use, discuss, criticize
13 April 2011, by
Talk to people directly involved in MediaSPIP's development, or to people around you who could use MediaSPIP to share, enhance or develop their creative projects.
The bigger the community, the more MediaSPIP's potential will be explored and the faster the software will evolve.
A discussion list is available for all exchanges between users.
-
MediaSPIP Player: potential problems
22 February 2011, by
The player does not work on Internet Explorer
On Internet Explorer (8 and 7 at least), the plugin uses the Flash player flowplayer to play video and sound. If the player does not seem to work, this may come from the configuration of Apache's mod_deflate module.
If the configuration of this Apache module contains a line that looks like the following, try removing it or commenting it out to see whether the player then works correctly: (...)
On other sites (6840)
-
Decode audio using ffmpeg (packet-by-packet) in Java
27 May 2022, by quad478
In my application, I receive an audio stream from an IP camera via RTP using Netty.
The stream from the IP camera comes in G.711 mulaw format, and I would like to transcode it to AAC.
I can't use files for this task as it's a live stream, and each packet needs to be decoded and delivered to the client (browser) immediately.
For this task, I wanted to use an ffmpeg child process:
when connecting to the camera, I create an ffmpeg process, write each received packet to its stdin, and then read the decoded packet back from its stdout.
Here is the command I run ffmpeg with:


"ffmpeg.exe -f mulaw -re -i - -f adts -"



I'm not sure if "-re" should be used here, but without this option, ffmpeg outputs the decode result only after stdin is closed and the process exits.
The problem is that I don't get anything on stdout after sending the packet to ffmpeg.


Decoder code:


package ru.ngslab.insentry.web.video.protocols.rtp;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.MessageToMessageDecoder;

import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RtpFfmpegDecoder extends MessageToMessageDecoder<RtpPacket> implements Closeable {

    private final Process ffmpegProcess;
    private final OutputStream ffmpegOutput;
    private final InputStream ffmpegInput;
    private final ExecutorService ffmpegInputReaderService = Executors.newSingleThreadExecutor();

    public RtpFfmpegDecoder() {

        // Start the ffmpeg process: read mulaw from stdin, write ADTS/AAC to stdout
        ProcessBuilder ffmpegBuilder = new ProcessBuilder("ffmpeg.exe", "-f", "mulaw",
                "-re", "-i", "-", "-f", "adts", "-").redirectError(ProcessBuilder.Redirect.INHERIT);
        try {
            ffmpegProcess = ffmpegBuilder.start();
            ffmpegOutput = ffmpegProcess.getOutputStream();
            ffmpegInput = ffmpegProcess.getInputStream();
        } catch (IOException e) {
            throw new IllegalStateException(e);
        }
    }

    @Override
    protected void decode(ChannelHandlerContext channelHandlerContext, RtpPacket rtpPacket,
                          List<Object> list) throws Exception {

        // Start reading ffmpeg's output on another thread
        Future<byte[]> future = ffmpegInputReaderService.submit(this::startReadFFmpegOutput);

        // Write the RTP packet payload to ffmpeg's stdin
        ByteBuf data = rtpPacket.getData();
        byte[] rtpData = new byte[data.readableBytes()];
        data.getBytes(data.readerIndex(), rtpData);
        ffmpegOutput.write(rtpData);
        ffmpegOutput.flush();

        // Wait here for the decoding result from ffmpeg
        // (blocks here and never completes)
        byte[] result = future.get();
        // then process result...
    }

    private byte[] startReadFFmpegOutput() {
        try {
            // I don't know how many bytes to expect here; for test purposes I use 1024
            var bytes = new byte[1024];
            int read = ffmpegInput.read(bytes);
            return read < 0 ? new byte[0] : Arrays.copyOf(bytes, read);
        } catch (IOException e) {
            throw new IllegalStateException(e);
        }
    }

    @Override
    public void close() throws IOException {
        // Close streams code...
    }
}


This doesn't work because ffmpeg doesn't send anything after receiving the packet.
There are no errors in the log and no output data.
It just waits for the result here:


byte[] result = future.get();



Normally, ffmpeg only produces output after stdin is closed and the process exits.
Might it be necessary to run ffmpeg with some special parameters so that it outputs a result for each received packet at once?
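For reference, one direction that might help is reducing ffmpeg's buffering on both the input and output side. This is a sketch rather than a confirmed fix, and it assumes the camera stream is 8 kHz mono G.711 mulaw:

ffmpeg.exe -fflags nobuffer -probesize 32 -analyzeduration 0 -f mulaw -ar 8000 -ac 1 -i - -f adts -flush_packets 1 -

Here -probesize and -analyzeduration shrink the input probing window, -fflags nobuffer reduces input-side buffering, and -flush_packets 1 asks the muxer to flush each packet to stdout as soon as it is written. -re is dropped: it rate-limits reading to simulate a live source from a file, which a live pipe does not need.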


I would be very grateful for any help.


-
Build ffmpeg on a build machine
18 July 2019, by RDI
Build ffmpeg on a build PC using libx264 and shared libraries (not static).
I am building on a Red Hat 6.6 server and the final target machine is CentOS 6.6.
I am trying, as said, to build ffmpeg with encoding enabled (with libx264) and shared libraries; of course I do not want to install the libraries on the build PC, they should only be extracted and then delivered together with the final RPM.
After "./configure" I get all the RPMs (related to ffmpeg), but when trying to install ffmpeg-libs on the build PC it fails because libx264.so.157 is not found, even though as a test I installed it (configure/make/make install) and it is present at /usr/local/lib. Where am I wrong?
Thanks
This is my SPEC file at the moment:
ldconfig /usr/local/lib
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
# configure
./configure \
--enable-gpl --disable-static --enable-shared --extra-cflags="-I/usr/local/include" \
--extra-ldflags="-L/usr/local/lib" --extra-libs=-ldl --disable-autodetect --disable-doc \
--disable-postproc --disable-ffplay --disable-everything \
--enable-encoder=aac --enable-encoder=png --enable-encoder=mjpeg --enable-encoder=libx264 \
--enable-decoder=aac --enable-decoder=h264 --enable-decoder=mpeg4 --enable-decoder=rawvideo --enable-decoder=png \
--enable-muxer=mp4 --enable-muxer=stream_segment --enable-muxer=image2 \
--enable-demuxer=aac --enable-demuxer=h264 --enable-demuxer=mov --enable-demuxer=rtp \
--enable-parser=aac --enable-parser=h264 --enable-parser=mpeg4video --enable-bsf=aac_adtstoasc \
--enable-protocol=file --enable-protocol=http --enable-protocol=tcp --enable-protocol=rtp --enable-protocol=udp \
--enable-indev=xcbgrab --disable-alsa --enable-libxcb --enable-libxcb-xfixes --enable-libxcb-shape \
--enable-zlib --prefix=%{_prefix} --bindir=%{_bindir} --datadir=%{_datadir}/%{name} --shlibdir=%{_libdir} \
--enable-alsa --enable-avfilter --enable-avresample --enable-libx264 --enable-filter=scale
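A hedged guess at the cause: the runtime linker on the build PC does not search /usr/local/lib by default, and the LD_LIBRARY_PATH export in the spec only affects the build shell, not the environment in which the packaged binaries are later run or checked. One sketch of a workaround (the config file name here is an assumption) is to register the directory system-wide:

echo "/usr/local/lib" > /etc/ld.so.conf.d/local-x264.conf
ldconfig

Alternatively, baking the search path into the binaries at link time, e.g. --extra-ldflags="-L/usr/local/lib -Wl,-rpath,/usr/local/lib", lets them locate libx264.so.157 without any system-wide configuration. If the failure instead comes from RPM's automatic dependency check, installing with rpm -ivh --nodeps would bypass it, at the cost of RPM no longer tracking that dependency.
-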
Applying same filter_complex many times before output [duplicate]
19 August 2019, by Fabián
This question already has an answer here:
It's not a duplicate. This is about using filter_complex, not -vf.
In my video there's an object that has shades of yellow (more orange-like) and a solid yellow background.
I need to output all frames into a PNG sequence, using a colorkey filter to remove the yellow background:
ffmpeg -ss 4 -i original.mp4 -t 2 -filter_complex "[0:v]colorkey=0xfff31b:0.125:0[ckout]" -map "[ckout]" colorkey-%d.png
This removes the specified color, but it leaves some points behind, and some items are yellow-themed, so raising the blend value is a no-go for this scenario.
I need to get rid of 4 specific yellow colors from the frames: 0xfff31b, 0xfae56b, 0xfaec46 and 0xeee2a0, and I plan to run the same filter for each specific color before getting the final result.
So first I tried this:
ffmpeg -ss 4 -i original.mp4 -t 2 -filter_complex "[0:v]colorkey=0xfff31b:0.4:0[ckout1];[0:v]colorkey=0xfae56b:0.4:0[ckout2];[0:v]colorkey=0xfaec46:0.4:0[ckout3];[0:v]colorkey=0xeee2a0:0.4:0[ckout4]" -map "[ckout4]" colorkeyrefined-%d.png
Then this:
ffmpeg -ss 4 -i original.mp4 -t 2 -filter_complex "[0:v]colorkey=0xfff31b:0.4:0[ckout]" -filter_complex "[0:v]colorkey=0xfae56b:0.4:0[ckout]" -filter_complex "[0:v]colorkey=0xfaec46:0.4:0[ckout]" -filter_complex "[0:v]colorkey=0xeee2a0:0.4:0[ckout]" -map "[ckout]" colorkeyrefined-%d.png
But both display the same error:
Filter colorkey has an unconnected output.
Is there a way to apply the colorkey filter 4 times (with the mentioned values) in one go?
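The error message is a hint: with four separate [0:v]colorkey=... chains, three of the four outputs are never consumed, so their pads stay unconnected. A sketch of a chained filtergraph, reusing the colors above, where each colorkey feeds the next:

ffmpeg -ss 4 -i original.mp4 -t 2 -filter_complex "[0:v]colorkey=0xfff31b:0.4:0[c1];[c1]colorkey=0xfae56b:0.4:0[c2];[c2]colorkey=0xfaec46:0.4:0[c3];[c3]colorkey=0xeee2a0:0.4:0[ckout]" -map "[ckout]" colorkeyrefined-%d.png

The intermediate labels [c1], [c2] and [c3] connect each filter's output to the next filter's input, so only [ckout] remains to be mapped.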