
Media (2)
-
Granite de l’Aber Ildut
9 September 2011
Updated: September 2011
Language: French
Type: Text
-
Géodiversité
9 September 2011
Updated: August 2018
Language: French
Type: Text
Other articles (15)
-
Organizing by category
17 May 2013
In MediaSPIP, a section (rubrique) has two names: category and section.
The various documents stored in MediaSPIP can be filed under different categories. A category can be created by clicking on "publish a category" in the publish menu at the top right (after logging in). A category can also be placed inside another category, which means a whole tree of categories can be built.
The next time a document is published, the newly created category will be offered (...) -
The themes of MediaSPIP
4 June 2013
Three themes are provided with MediaSPIP out of the box. A MediaSPIP user can add further themes as needed.
MediaSPIP themes
Three themes were initially developed for MediaSPIP: * SPIPeo: the default MediaSPIP theme. It highlights the site's presentation and the most recent media documents (the sort order can be changed: title, popularity, date). * Arscenic: the theme used on the project's official website, featuring in particular a red banner at the top of the page. The structure (...) -
Custom menus
14 November 2010
MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
This lets channel administrators fine-tune the configuration of these menus.
Menus created when the site is initialized
By default, three menus are created automatically when the site is initialized: The main menu; Identifier: barrenav; This menu is generally inserted at the top of the page after the header block, and its identifier makes it compatible with Zpip-based templates; (...)
On other sites (3119)
-
Resolving rtmp stream url from Adobe Flash container
14 April 2013, by Mustafe
How can an rtmp stream URL, complete with its playpath, be retrieved from a Flash plugin and then played with ffplay/avconv? In the embedded Flash code below the rtmp address exists (rtmp://live.atv.com.tr/atv), however it does not work with avplay since it also needs the playpath.
/i.tmgrup.com.tr/p/flowplayer.controls.swf"},"influxis":{"url":"http://i.tmgrup.com.tr/p/flowplayer.rtmp.swf","netConnectionUrl":"rtmp://live.atv.com.tr/atv"},"ova":{"url":"http://i.tmgrup.com.tr/p/ova.swf","autoPlay":true,"overlays":{"regions":[{"id":"Reklam","verticalAlign":"bottom","horizontalAlign":"right","backgroundC...0,"style":".smalltext { font-style: italic; font-size:10; }"}]},"ads":{"notice":{"show":true,"region":"my-ad-notice","message":"<p class="\"smalltext\"" align="\"right\""> Kalan süre : _countdown_ saniye.</p>"},"schedule":[{"zone":"5","position":"pre-roll","server":{"type":"direct","apiAddress":"http%3a%2f%2fad.reklamport.com%2frpgetad.ashx%3ftt%3dt_atv_canli_yayin_preroll_800x700%26vast%3d2"}}]}}},"playerId":"live","playlist":[{"eventCategory":"Canli Yayin","url":"atv3","live":true,"provider":"influxis"}]}" name="flashvars">
Problem solved: using Wireshark (a network analyzer) is a much more effective way to retrieve parameters like the rtmp URL and playpath...
Edit 2: Some URLs are also embedded in script files rather than directly in the Flash object; these variables are then used by the Flash object above.
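For reference, once the playpath is known, it can usually be supplied to ffplay directly. This is only a sketch, assuming the playpath is the playlist entry "atv3" visible in the flashvars above (not verified against this stream) and that the build includes librtmp for the space-separated option form:
ffplay "rtmp://live.atv.com.tr/atv playpath=atv3 live=1"
With ffmpeg's native RTMP handler, appending the playpath to the app path (rtmp://live.atv.com.tr/atv/atv3) is usually the equivalent form.
-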
Elacarte Presto Tablets
14 March 2013, by Multimedia Mike — General
I visited an Applebee's restaurant this past weekend. The first thing I spied was a family at a table with what looked like a 7-inch tablet. It's not an uncommon sight. However, as I moved through the restaurant, I noticed that every single table was equipped with such a tablet. It looked like this:
For a computer nerd like me, you could probably guess that I would be far more interested in this gadget than in the cuisine. The thing said "Presto" on the front and "Elacarte" on the back. Putting this together, we get the website of Elacarte, the purveyors of this restaurant tablet technology. Months after the iPad was released in 2010, I remember stories about high-end restaurants showing their wine lists via iPads. This tablet goes well beyond that.
How was it? Well, confusing, mostly. The hostess told us we could order through the tablet or through her. Since we already knew what we wanted, she just manually took our order and presumably entered it into the system. So, right away, the question is: Do we order through a human or through a computer? Or a combination? Do we have to use the tablet if we don't want to?
Hardware
When picking up the tablet, it's hard not to notice that it is very heavy. At first, I suspected that it was deliberately weighted down as some minor attempt at an anti-theft measure. But then I remembered what I know about the power budgets of phones and tablets: powering the screen accounts for much of the battery usage. I realized that this device needs to drive the screen for about 14 continuous hours each day. That is, the weight must come from a massive battery.
The screen is good. It's a capacitive touchscreen, so nice and responsive. When I first spied the device, I felt certain it would be a resistive touchscreen (which is more accurately called a touch-and-press-down screen). There is an AC adapter on the side of the tablet. This is the only interface to the device:
That looks to me like an internal SATA connector (different from an eSATA connector). Foolishly, I didn’t have a SATA cable on me so I couldn’t verify.
User Interface
The interface options are: Order, Games, Neighborhood, and Pay. One big benefit of accessing the menu through the Order option is that each menu item can have a picture. For people who order more by picture than by text description, this is useful. Rather, it would be, if more items had pictures. I'm not sure there were more pictures than in the print menu.
For Games, there was a variety of party games. The interface clearly stated that we got to play 2 free games. This implied to me that further games cost money. We tried one game briefly, and then the food came.
Two more options: Neighborhood. I know I dug into this option, but I forget what it was. Maybe it discussed local attractions. Finally, Pay. This thing has an integrated credit card reader. There is no integrated printer, though, so if you want a printed copy, you will have to request one from a human.
Experience
So we ordered through a human, since we didn't feel like being thrust into this new paradigm when we just wanted lunch. The staff was obviously amenable to that. However, I got a chance to ask them a lot of questions about the particulars. Apparently, they have had this system for about 5 months. It was confirmed that the tablets do, in fact, have gargantuan batteries that have to last through the restaurant's entire business hours. Do they need to be charged every night? Yes, they do. But how? The staff described several large charging blocks with many cables sprouting out. Reportedly, some units still don't make it through the entire day.
When it was time to pay, I pressed the Pay button on the interface. The bill I saw had nothing in common with what we ordered (actually, it was cheaper, so perhaps I should have just accepted it). But I pointed it out to a human and they said that this happens sometimes. So they manually printed my bill. There was a dollar charge for the game that was supposed to be free. I pointed this out and they removed it. It's minor, I know, but it's still worth trying to work out these bugs.
One of the staff also described how a restaurant doesn’t need to employ as many people thanks to the tablet. She gave a nervous, awkward, self-conscious laugh when she said this. All I could think of was this Dilbert comic strip in which the boss realizes that his smartphone could perform certain key functions previously handled by his assistant.
Not A New Idea
Some people might think this is a totally new concept. It's not. I was immediately reminded of my university days in Boulder, Colorado, USA, circa 1997. The local Taco Bell and Arby's restaurants both had touchscreen ordering kiosks. Step up, interact with the (probably resistive) touchscreen, get a number, and step to the counter to change money, get your food, and probably clarify your order because there is only so much that can be handled through a touchscreen.
What I also remember is when they tore out those ordering kiosks, also circa 1997. I don't know the exact reason. Maybe people didn't like them. Maybe there were maintenance costs that made them not worth the hassle.
Then there are the widespread self-checkout lanes in grocery stores. Personally, I like those, though I know many don't. However, this restaurant tablet thing hasn't won me over yet. What's the difference? Perhaps it's that the automated lanes at grocery stores require zero external assistance, at least if you do everything correctly. Personally, I work well with these lanes because I can pretty much guess the constraints of the system and I am careful not to confuse the computer in any way. Until they deploy serving droids, or at least food conveyors, there still needs to be some human interaction, and I think the division between the human and computer roles is unintuitive in the restaurant case.
I don’t really care to return to the same restaurant. I’ll likely avoid any other restaurant that has these tablets. For some reason, I think I’m probably supposed to be the ideal consumer of this concept. But the idea will probably perform all right anyway. Elacarte’s website has plenty of graphs demonstrating that deploying these tablets is extremely profitable.
-
encode h264 video using ffmpeg library memory issues
31 March 2015, by Zeppa
I'm trying to do screen capture on OS X using ffmpeg's avfoundation library. I capture frames from the screen and encode them using H264 into an FLV container. Here's the command-line output of the program:
Input #0, avfoundation, from 'Capture screen 0':
Duration: N/A, start: 9.253649, bitrate: N/A
Stream #0:0: Video: rawvideo (UYVY / 0x59565955), uyvy422, 1440x900, 14.58 tbr, 1000k tbn, 1000k tbc
raw video is inCodec
FLV (Flash Video)http://localhost:8090/test.flv
[libx264 @ 0x102038e00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 @ 0x102038e00] profile High, level 4.0
[libx264 @ 0x102038e00] 264 - core 142 r2495 6a301b6 - H.264/MPEG-4 AVC codec - Copyleft 2003-2014 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=1 weightp=2 keyint=50 keyint_min=5 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=abr mbtree=1 bitrate=400 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
[tcp @ 0x101a5fe70] Connection to tcp://localhost:8090 failed (Connection refused), trying next address
[tcp @ 0x101a5fe70] Connection to tcp://localhost:8090 failed: Connection refused
url_fopen failed: Operation now in progress
[flv @ 0x102038800] Using AVStream.codec.time_base as a timebase hint to the muxer is deprecated. Set AVStream.time_base instead.
encoded frame #0
encoded frame #1
......
encoded frame #49
encoded frame #50
testmee(8404,0x7fff7e05c300) malloc: *** error for object 0x102053e08: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
(lldb) bt
* thread #10: tid = 0x43873, 0x00007fff95639286 libsystem_kernel.dylib`__pthread_kill + 10, stop reason = signal SIGABRT
* frame #0: 0x00007fff95639286 libsystem_kernel.dylib`__pthread_kill + 10
frame #1: 0x00007fff9623742f libsystem_pthread.dylib`pthread_kill + 90
frame #2: 0x00007fff977ceb53 libsystem_c.dylib`abort + 129
frame #3: 0x00007fff9ab59e06 libsystem_malloc.dylib`szone_error + 625
frame #4: 0x00007fff9ab4f799 libsystem_malloc.dylib`small_malloc_from_free_list + 1105
frame #5: 0x00007fff9ab4d3bc libsystem_malloc.dylib`szone_malloc_should_clear + 1449
frame #6: 0x00007fff9ab4c877 libsystem_malloc.dylib`malloc_zone_malloc + 71
frame #7: 0x00007fff9ab4b395 libsystem_malloc.dylib`malloc + 42
frame #8: 0x00007fff94aa63d2 IOSurface`IOSurfaceClientLookupFromMachPort + 40
frame #9: 0x00007fff94aa6b38 IOSurface`IOSurfaceLookupFromMachPort + 12
frame #10: 0x00007fff92bfa6b2 CoreGraphics`_CGYDisplayStreamFrameAvailable + 342
frame #11: 0x00007fff92f6759c CoreGraphics`CGYDisplayStreamNotification_server + 336
frame #12: 0x00007fff92bfada6 CoreGraphics`display_stream_runloop_callout + 46
frame #13: 0x00007fff956eba07 CoreFoundation`__CFMachPortPerform + 247
frame #14: 0x00007fff956eb8f9 CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 41
frame #15: 0x00007fff956eb86b CoreFoundation`__CFRunLoopDoSource1 + 475
frame #16: 0x00007fff956dd3e7 CoreFoundation`__CFRunLoopRun + 2375
frame #17: 0x00007fff956dc858 CoreFoundation`CFRunLoopRunSpecific + 296
frame #18: 0x00007fff95792ef1 CoreFoundation`CFRunLoopRun + 97
frame #19: 0x0000000105f79ff1 CMIOUnits`___lldb_unnamed_function2148$$CMIOUnits + 875
frame #20: 0x0000000105f6f2c2 CMIOUnits`___lldb_unnamed_function2127$$CMIOUnits + 14
frame #21: 0x00007fff97051765 CoreMedia`figThreadMain + 417
frame #22: 0x00007fff96235268 libsystem_pthread.dylib`_pthread_body + 131
frame #23: 0x00007fff962351e5 libsystem_pthread.dylib`_pthread_start + 176
frame #24: 0x00007fff9623341d libsystem_pthread.dylib`thread_start + 13
I've attached the code I used below.
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavdevice/avdevice.h>
#include <libavutil/opt.h>
#include <stdio.h>   /* header names were lost in formatting; stdio.h is needed for printf/fprintf/snprintf */
#include <stdlib.h>  /* needed for exit() */
#include <string.h>
/* compile using
gcc -g -o stream test.c -lavformat -lavutil -lavcodec -lavdevice -lswscale
*/
// void show_av_device() {
// inFmt->get_device_list(inFmtCtx, device_list);
// printf("Device Info=============\n");
// //avformat_open_input(&inFmtCtx,"video=Capture screen 0",inFmt,&inOptions);
// printf("===============================\n");
// }
void AVFAIL (int code, const char *what) {
char msg[500];
av_strerror(code, msg, sizeof(msg));
fprintf(stderr, "failed: %s\nerror: %s\n", what, msg);
exit(2);
}
#define AVCHECK(f) do { int e = (f); if (e < 0) AVFAIL(e, #f); } while (0)
#define AVCHECKPTR(p,f) do { p = (f); if (!p) AVFAIL(AVERROR_UNKNOWN, #f); } while (0)
void registerLibs() {
av_register_all();
avdevice_register_all();
avformat_network_init();
avcodec_register_all();
}
int main(int argc, char *argv[]) {
//conversion variables
struct SwsContext *swsCtx = NULL;
//input stream variables
AVFormatContext *inFmtCtx = NULL;
AVCodecContext *inCodecCtx = NULL;
AVCodec *inCodec = NULL;
AVInputFormat *inFmt = NULL;
AVFrame *inFrame = NULL;
AVDictionary *inOptions = NULL;
const char *streamURL = "http://localhost:8090/test.flv";
const char *name = "avfoundation";
// AVFrame *inFrameYUV = NULL;
AVPacket inPacket;
//output stream variables
AVCodecContext *outCodecCtx = NULL;
AVCodec *outCodec;
AVFormatContext *outFmtCtx = NULL;
AVOutputFormat *outFmt = NULL;
AVFrame *outFrameYUV = NULL;
AVStream *stream = NULL;
int i, videostream, ret;
int numBytes, frameFinished;
registerLibs();
inFmtCtx = avformat_alloc_context(); //alloc input context
av_dict_set(&inOptions, "pixel_format", "uyvy422", 0);
av_dict_set(&inOptions, "probesize", "7000000", 0);
inFmt = av_find_input_format(name);
ret = avformat_open_input(&inFmtCtx, "Capture screen 0:", inFmt, &inOptions);
if (ret < 0) {
printf("Could not load the context for the input device\n");
return -1;
}
if (avformat_find_stream_info(inFmtCtx, NULL) < 0) {
printf("Could not find stream info for screen\n");
return -1;
}
av_dump_format(inFmtCtx, 0, "Capture screen 0", 0);
// inFmtCtx->streams is an array of pointers of size inFmtCtx->nb_stream
videostream = av_find_best_stream(inFmtCtx, AVMEDIA_TYPE_VIDEO, -1, -1, &inCodec, 0);
if (videostream == -1) {
printf("no video stream found\n");
return -1;
} else {
printf("%s is inCodec\n", inCodec->long_name);
}
inCodecCtx = inFmtCtx->streams[videostream]->codec;
// open codec
if (avcodec_open2(inCodecCtx, inCodec, NULL) > 0) {
printf("Couldn't open codec");
return -1; // couldn't open codec
}
//setup output params
outFmt = av_guess_format(NULL, streamURL, NULL);
if(outFmt == NULL) {
printf("output format was not guessed properly");
return -1;
}
if((outFmtCtx = avformat_alloc_context()) < 0) {
printf("output context not allocated. ERROR");
return -1;
}
printf("%s", outFmt->long_name);
outFmtCtx->oformat = outFmt;
snprintf(outFmtCtx->filename, sizeof(outFmtCtx->filename), streamURL);
printf("%s\n", outFmtCtx->filename);
outCodec = avcodec_find_encoder(AV_CODEC_ID_H264);
if(!outCodec) {
printf("could not find encoder for H264 \n" );
return -1;
}
stream = avformat_new_stream(outFmtCtx, outCodec);
outCodecCtx = stream->codec;
avcodec_get_context_defaults3(outCodecCtx, outCodec);
outCodecCtx->codec_id = AV_CODEC_ID_H264;
outCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
outCodecCtx->flags = CODEC_FLAG_GLOBAL_HEADER;
outCodecCtx->width = inCodecCtx->width;
outCodecCtx->height = inCodecCtx->height;
outCodecCtx->time_base.den = 25;
outCodecCtx->time_base.num = 1;
outCodecCtx->pix_fmt = AV_PIX_FMT_YUV420P;
outCodecCtx->gop_size = 50;
outCodecCtx->bit_rate = 400000;
//setup output encoders etc
if(stream) {
ret = avcodec_open2(outCodecCtx, outCodec, NULL);
if (ret < 0) {
printf("Could not open output encoder");
return -1;
}
}
if (avio_open(&outFmtCtx->pb, outFmtCtx->filename, AVIO_FLAG_WRITE ) < 0) {
perror("url_fopen failed");
}
avio_open_dyn_buf(&outFmtCtx->pb);
ret = avformat_write_header(outFmtCtx, NULL);
if (ret != 0) {
printf("was not able to write header to output format");
return -1;
}
unsigned char *pb_buffer;
int len = avio_close_dyn_buf(outFmtCtx->pb, (unsigned char **)(&pb_buffer));
avio_write(outFmtCtx->pb, (unsigned char *)pb_buffer, len);
numBytes = avpicture_get_size(PIX_FMT_UYVY422, inCodecCtx->width, inCodecCtx->height);
// Allocate video frame
inFrame = av_frame_alloc();
swsCtx = sws_getContext(inCodecCtx->width, inCodecCtx->height, inCodecCtx->pix_fmt, inCodecCtx->width,
inCodecCtx->height, PIX_FMT_YUV420P, SWS_BILINEAR, NULL, NULL, NULL);
int frame_count = 0;
while(av_read_frame(inFmtCtx, &inPacket) >= 0) {
if(inPacket.stream_index == videostream) {
avcodec_decode_video2(inCodecCtx, inFrame, &frameFinished, &inPacket);
// 1 Frame might need more than 1 packet to be filled
if(frameFinished) {
outFrameYUV = av_frame_alloc();
uint8_t *buffer = (uint8_t *)av_malloc(numBytes);
int ret = avpicture_fill((AVPicture *)outFrameYUV, buffer, PIX_FMT_YUV420P,
inCodecCtx->width, inCodecCtx->height);
if(ret < 0){
printf("%d is return val for fill\n", ret);
return -1;
}
//convert image to YUV
sws_scale(swsCtx, (uint8_t const * const* )inFrame->data,
inFrame->linesize, 0, inCodecCtx->height,
outFrameYUV->data, outFrameYUV->linesize);
//outFrameYUV now holds the YUV scaled frame/picture
outFrameYUV->format = outCodecCtx->pix_fmt;
outFrameYUV->width = outCodecCtx->width;
outFrameYUV->height = outCodecCtx->height;
AVPacket pkt;
int got_output;
av_init_packet(&pkt);
pkt.data = NULL;
pkt.size = 0;
outFrameYUV->pts = frame_count;
ret = avcodec_encode_video2(outCodecCtx, &pkt, outFrameYUV, &got_output);
if (ret < 0) {
fprintf(stderr, "Error encoding video frame: %s\n", av_err2str(ret));
return -1;
}
if(got_output) {
if(stream->codec->coded_frame->key_frame) {
pkt.flags |= AV_PKT_FLAG_KEY;
}
pkt.stream_index = stream->index;
if(pkt.pts != AV_NOPTS_VALUE)
pkt.pts = av_rescale_q(pkt.pts, stream->codec->time_base, stream->time_base);
if(pkt.dts != AV_NOPTS_VALUE)
pkt.dts = av_rescale_q(pkt.dts, stream->codec->time_base, stream->time_base);
if(avio_open_dyn_buf(&outFmtCtx->pb)!= 0) {
printf("ERROR: Unable to open dynamic buffer\n");
}
ret = av_interleaved_write_frame(outFmtCtx, &pkt);
unsigned char *pb_buffer;
int len = avio_close_dyn_buf(outFmtCtx->pb, (unsigned char **)&pb_buffer);
avio_write(outFmtCtx->pb, (unsigned char *)pb_buffer, len);
} else {
ret = 0;
}
if(ret != 0) {
fprintf(stderr, "Error while writing video frame: %s\n", av_err2str(ret));
exit(1);
}
fprintf(stderr, "encoded frame #%d\n", frame_count);
frame_count++;
av_free_packet(&pkt);
av_frame_free(&outFrameYUV);
av_free(buffer);
}
}
av_free_packet(&inPacket);
}
av_write_trailer(outFmtCtx);
//close video stream
if(stream) {
avcodec_close(outCodecCtx);
}
for (i = 0; i < outFmtCtx->nb_streams; i++) {
av_freep(&outFmtCtx->streams[i]->codec);
av_freep(&outFmtCtx->streams[i]);
}
if (!(outFmt->flags & AVFMT_NOFILE))
/* Close the output file. */
avio_close(outFmtCtx->pb);
/* free the output format context */
avformat_free_context(outFmtCtx);
// Free the YUV frame populated by the decoder
av_free(inFrame);
// Close the video codec (decoder)
avcodec_close(inCodecCtx);
// Close the input video file
avformat_close_input(&inFmtCtx);
return 1;
}I’m not sure what I’ve done wrong here. But, what I’ve observed is that for each frame thats been encoded, my memory usage goes up by about 6MB. Backtracking afterward usually leads one of the following two culprits :
- avf_read_frame function in avfoundation.m
- av_dup_packet function in avpacket.h
Can I also get advice on the way I'm using the avio_open_dyn_buf function to be able to stream over HTTP? I've also attached my ffmpeg library versions below:
ffmpeg version N-70876-g294bb6c Copyright (c) 2000-2015 the FFmpeg developers
built with Apple LLVM version 6.0 (clang-600.0.56) (based on LLVM 3.5svn)
configuration: --prefix=/usr/local --enable-gpl --enable-postproc --enable-pthreads --enable-libmp3lame --enable-libtheora --enable-libx264 --enable-libvorbis --disable-mmx --disable-ssse3 --disable-armv5te --disable-armv6 --disable-neon --enable-shared --disable-static --disable-stripping
libavutil 54. 20.100 / 54. 20.100
libavcodec 56. 29.100 / 56. 29.100
libavformat 56. 26.101 / 56. 26.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 13.101 / 5. 13.101
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
Hyper fast Audio and Video encoder
Valgrind analysis attached here because I exceeded Stack Overflow's character limit: http://pastebin.com/MPeRhjhN
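Two properties of avio_close_dyn_buf() seem relevant to the loop above, and a sketch may make them concrete. First, it frees the dynamic AVIOContext it is handed, so writing through the old outFmtCtx->pb afterwards touches freed memory; second, it returns a malloc'ed buffer that the caller owns and must release with av_free(), so never freeing pb_buffer would leak roughly one muxed frame per iteration. The following is only a sketch of the usual per-packet round trip, not a verified fix for the code above; net_io is a name introduced here for a separate AVIOContext opened on the real output URL (it does not appear in the original code):

/* Sketch: keep the real network output separate from the temporary dyn buf. */
AVIOContext *net_io = NULL;
if (avio_open(&net_io, streamURL, AVIO_FLAG_WRITE) < 0)
    return -1;

/* Per packet: mux into a dynamic buffer... */
if (avio_open_dyn_buf(&outFmtCtx->pb) < 0)
    return -1;
av_interleaved_write_frame(outFmtCtx, &pkt);

/* ...then hand the bytes to the real output and release the buffer.
   avio_close_dyn_buf() frees the dyn-buf AVIOContext itself, so the old
   outFmtCtx->pb must not be used again; only the returned buffer remains
   to be freed by the caller. */
uint8_t *pb_buffer = NULL;
int len = avio_close_dyn_buf(outFmtCtx->pb, &pb_buffer);
outFmtCtx->pb = NULL;
avio_write(net_io, pb_buffer, len);
av_free(pb_buffer);

Whether this fully explains the malloc corruption in the backtrace is uncertain, but the missing av_free(pb_buffer) alone would be consistent with memory growing steadily with every encoded frame.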