
Other articles (112)
-
Customise by adding your logo, banner or background image
5 September 2013
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects or individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (17620)
-
Connect external cameras to iOS and decompress to a usable form
27 September 2017, by Ping Chen
I want to create a two-camera setup that can send one of the camera views out as an RTMP stream, depending on the motion intensity detected. The chosen camera view can change if the motion intensity on the views changes.
I imagine I could use an iPhone/iPad as the encoding/streaming hub as well as one of the cameras, and connect a WiFi camera to the iPad/iPhone to feed the second camera view.
My goals for the iOS side are:
- Connect with a WiFi camera on the local network
- Decode the data and run motion intensity detection on the WiFi camera feed AND the iPhone/iPad’s own camera feed with Brad Larson’s GPUImage framework https://github.com/BradLarson/GPUImage
- Stream out the chosen camera view, depending on the motion detected
Larson’s GPUImage framework works with an AVCaptureSession subclass. I’m only familiar with AVFoundation objects, but am a complete noob when it comes to VideoToolbox and some of the lower-level iOS video stuff. Through googling, I kind of know that VTDecompressionSession is what I’d get from the WiFi camera. I have no clue how I can manipulate that into a usable form for my purposes.
I’ve dug through Stack Overflow answers such as: https://stackoverflow.com/a/29525001/7097455
Very informative, but maybe I don’t even know how to ask the correct questions.
-
bindProcessToNetwork is not working with ffmpeg in Android
12 January 2020, by Mithun Sarker Shuvro
I am using FFmpeg to re-stream video from a camera (connected via WiFi, with no internet connection) to another server, and I want to do the re-streaming over cellular data. Since I am already connected to WiFi, to use cellular data at the same time I use bindProcessToNetwork(). Before executing the ffmpeg command I have done the following:

final ConnectivityManager cm = (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE);
NetworkRequest.Builder req = new NetworkRequest.Builder();
req.addCapability(NetworkCapabilities.NET_CAPABILITY_INTERNET);
req.addTransportType(NetworkCapabilities.TRANSPORT_CELLULAR);
cm.requestNetwork(req.build(), new ConnectivityManager.NetworkCallback() {
    @Override
    public void onAvailable(Network network) {
        // here you can use bindProcessToNetwork
        // cm.getNetworkInfo(network);
        if (cm.getNetworkInfo(network).getType() == ConnectivityManager.TYPE_MOBILE) {
            cm.bindProcessToNetwork(network);
        }
    }
});

It is working fine in most cases; for example, WebView works properly over cellular data while connected to WiFi, but when I try to execute any ffmpeg command it doesn’t work.
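For reference, ConnectivityManager.bindProcessToNetwork() binds sockets created by the calling app process to the given network. Below is a minimal sketch (the class name and helper names are hypothetical, and the cellular parameter is assumed to be the Network delivered to onAvailable() above) of how the process-wide binding could be checked, and how an individual socket could be bound to the cellular network instead:

import android.net.ConnectivityManager;
import android.net.Network;

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch only: names are illustrative, not part of the original post.
public class CellularBindingHelper {

    // Returns true if the calling process is currently bound to the given network (API 23+).
    static boolean isProcessBoundTo(ConnectivityManager cm, Network cellular) {
        Network bound = cm.getBoundNetworkForProcess(); // null when no process-wide binding is set
        return bound != null && bound.equals(cellular);
    }

    // Binds a single socket to the cellular network, independently of any process-wide binding (API 22+).
    static Socket connectViaCellular(Network cellular, String host, int port) throws IOException {
        Socket socket = new Socket();
        cellular.bindSocket(socket); // route this socket's traffic over the given network
        socket.connect(new InetSocketAddress(host, port));
        return socket;
    }
}

Calling isProcessBoundTo() immediately before launching the ffmpeg command would confirm whether the binding is still in place at that point.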
-
FFMPEG: RTSP to HLS restream stops with "No more output streams to write to, finishing."
1 June 2022, by Tim W
I'm trying to do a live restream of an RTSP feed from a webcam using ffmpeg, but the stream repeatedly stops with the error:

"No more output streams to write to, finishing."



The problem seems to get worse at higher bitrates (256 kbps is mostly reliable) and is pretty random in its occurrence. At 1 Mbps, sometimes the stream will run for several hours without any trouble; on other occasions it will fail every few minutes. I've got a cron job running which restarts the stream automatically when it fails, but I would prefer to avoid the continued interruptions.



I have seen this problem reported in a handful of other forums, so it is not a unique problem, but none of those reports had a solution attached. My ffmpeg command looks like this:



ffmpeg -loglevel verbose -r 25 -rtsp_transport tcp -i rtsp://user:password@camera.url/live/ch0 -reset_timestamps 1 -movflags frag_keyframe+empty_moov -bufsize 7168k -stimeout 60000 -hls_flags temp_file -hls_time 5 -hls_wrap 180 -acodec copy -vcodec copy streaming.m3u8 > encode.log 2>&1



What gets me is that the error makes no sense: this is a live stream, so output is always wanted until I shut off the stream. Having it shut down because output isn't wanted is downright odd. If ffmpeg were complaining about a problem with the input, it would make more sense.



I'm running version 3.3.4, which I believe is the latest.



Update 13 Oct 17:



After extensive testing I've established that the "No more outputs" error message generated by FFMPEG is very misleading. The error seems to be generated if the data coming in from RTSP is delayed, e.g. by other activity on the router the camera is connected via. I've got a large buffer and timeout set which should be sufficient for 60 seconds, but I can still deliberately trigger this error with far shorter interruptions, so clearly the buffer and timeout aren't having the desired effect. This might be fixed by setting a QOS policy on the router and by checking that the TCP packets from the camera have a suitably high priority set; it's possible this isn't the case.
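One detail that may be relevant here: in the ffmpeg 3.x RTSP demuxer, the stimeout option is documented as a socket I/O timeout in microseconds, and like other input options it only takes effect when placed before the -i it applies to. If that is the case, -stimeout 60000 in the command above would mean 60 milliseconds rather than 60 seconds. A hypothetical variant for testing that assumption (same camera URL and output as above) might look like:

ffmpeg -loglevel verbose -rtsp_transport tcp -stimeout 60000000 -i rtsp://user:password@camera.url/live/ch0 -acodec copy -vcodec copy -hls_flags temp_file -hls_time 5 -hls_wrap 180 streaming.m3u8 > encode.log 2>&1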



However, I would still like to improve the robustness of the input stream if it is briefly interrupted. Is there any way to persuade FFMPEG to tolerate this, or to actually make use of the buffer it seems to be ignoring? Can FFMPEG be persuaded to simply stop writing output and wait for input to become available rather than bailing out? Or could I get FFMPEG to duplicate the last complete frame until it's able to get more data? I can live with the stream stuttering a bit, but I've got to significantly reduce the current behaviour where the stream drops at the slightest hint of a problem.

Further update 13 Oct 2017:



After more tests, I've found that the problem actually seems to be that HLS is incapable of coping with a discontinuity in the incoming video stream. If I deliberately cut the network connection between the camera and FFMPEG, FFMPEG will wait quite a long time for the connection to be re-established. If the interruption is long (>10 seconds), the stream will immediately drop with the "No More Outputs" error the instant the connection is re-established. If the interruption is short, RTSP will actually start pulling data from the camera again, but the stream will then drop with the same error a few seconds later. So it seems clear that the gap in the input data is causing the HLS encoder to have a fit and give up once the stream resumes, but the size of the gap has an impact on whether the drop is instant or not.