.layout-content.status.status-index .components-statuses .component-container .name {
color: #5f5f5f;
color: rgba(95, 95, 95, .8);
}
small,
.layout-content.status .table-row .date,
.color-secondary,
.layout-content.status .grouped-items-selector.inline .grouped-item,
.layout-content.status.all,
.layout-content.status.status-full-history .history-footer .pagination a.disabled,
.layout-content.status.status-full-history .history-nav a,
#uptime-tooltip .tooltip-box .tooltip-content .related-events #related-event-header {
color: #AAAAAA;
}
body.status .layout-content.status .border-color,
hr,
.tooltip-base,
.markdown-display table,
div[id^="subscribe-modal"],
#uptime-tooltip .tooltip-box {
border-color: #dddddd;
}
div[id^="subscribe-modal"] .modal-footer,
.markdown-display table td {
border-top-color: #dddddd;
}
div[id^="subscribe-modal"] .modal-header .close:hover {
color: #dddddd;
}
.markdown-display table td+td,
.markdown-display table th+th {
border-left-color: #dddddd;
}
<style>
body {
margin: 0;
font-family: Arial, Helvetica, sans-serif;
}
#header { background-color: #f1f1f1; padding: 50px 10px; color: black; text-align: center; font-size: 90px; font-weight: bold; position: fixed; top: 0; width: 100%; transition: 0.2s; } </style>
<meta charset="utf-8" />
<title>Blockchain Status</title>
<meta name="description" content="Welcome to Blockchain's home for real-time and historical data on system performance." />
<!-- Mobile viewport optimization h5bp.com/ad -->
<meta name="HandheldFriendly" content="True" />
<meta name="MobileOptimized" content="320" />
<meta name="viewport" content="width=device-width, initial-scale=1.0, minimum-scale=1.0, maximum-scale=1.0" />
<!-- Time this page was rendered -->
<meta name="issued" content="1643098294" />
<!-- Mobile IE allows us to activate ClearType technology for smoothing fonts for easy reading -->
<meta http-equiv="cleartype" content="on" />
<!-- Fonts -->
<style>
@font-face {
font-family: 'proxima-nova';
src: url('https://dka575ofm4ao0.cloudfront.net/assets/ProximaNovaLight-f0b2f7c12b6b87c65c02d3c1738047ea67a7607fd767056d8a2964cc6a2393f7.eot?host=status.blockchain.com');
src: url('https://dka575ofm4ao0.cloudfront.net/assets/ProximaNovaLight-f0b2f7c12b6b87c65c02d3c1738047ea67a7607fd767056d8a2964cc6a2393f7.eot?host=status.blockchain.com#iefix') format('embedded-opentype'),
url('https://dka575ofm4ao0.cloudfront.net/assets/ProximaNovaLight-e642ffe82005c6208632538a557e7f5dccb835c0303b06f17f55ccf567907241.woff?host=status.blockchain.com') format('woff'),
url('https://dka575ofm4ao0.cloudfront.net/assets/ProximaNovaLight-0f094da9b301d03292f97db5544142a16f9f2ddf50af91d44753d9310c194c5f.ttf?host=status.blockchain.com') format('truetype');
font-weight: 300;
font-style: normal;
}
@font-face {
font-family: 'proxima-nova';
src: url('https://dka575ofm4ao0.cloudfront.net/assets/ProximaNovaRegular-366d17769d864aa72f27defaddf591e460a1de4984bb24dacea57a9fc1d14878.eot?host=status.blockchain.com');
src: url('https://dka575ofm4ao0.cloudfront.net/assets/ProximaNovaRegular-366d17769d864aa72f27defaddf591e460a1de4984bb24dacea57a9fc1d14878.eot?host=status.blockchain.com#iefix') format('embedded-opentype'),
url('https://dka575ofm4ao0.cloudfront.net/assets/ProximaNovaRegular-2ee4c449a9ed716f1d88207bd1094e21b69e2818b5cd36b28ad809dc1924ec54.woff?host=status.blockchain.com') format('woff'),
url('https://dka575ofm4ao0.cloudfront.net/assets/ProximaNovaRegular-a40a469edbd27b65b845b8000d47445a17def8ba677f4eb836ad1808f7495173.ttf?host=status.blockchain.com') format('truetype');
font-weight: 400;
font-style: normal;
}
@font-face {
font-family: 'proxima-nova';
src: url('https://dka575ofm4ao0.cloudfront.net/assets/ProximaNovaRegularIt-0bf83a850b45e4ccda15bd04691e3c47ae84fec3588363b53618bd275a98cbb7.eot?host=status.blockchain.com');
src: url('https://dka575ofm4ao0.cloudfront.net/assets/ProximaNovaRegularIt-0bf83a850b45e4ccda15bd04691e3c47ae84fec3588363b53618bd275a98cbb7.eot?host=status.blockchain.com#iefix') format('embedded-opentype'),
url('https://dka575ofm4ao0.cloudfront.net/assets/ProximaNovaRegularIt-0c394ec7a111aa7928ea470ec0a67c44ebdaa0f93d1c3341abb69656cc26cbdd.woff?host=status.blockchain.com') format('woff'),
url('https://dka575ofm4ao0.cloudfront.net/assets/ProximaNovaRegularIt-9e43859f8015a4d47d9eaf7bafe8d1e26e3298795ce1f4cdb0be0479b8a4605e.ttf?host=status.blockchain.com') format('truetype');
font-weight: 400;
font-style: italic;
}">Header</div>
<style> .dropbtn { background-color: #04AA6D; color: white; padding: 16px; font-size: 16px; border: none; cursor: pointer; }
.dropdown { position: relative; display: inline-block; }
.dropdown-content { display: none; position: absolute; right: 0; background-color: #f9f9f9; min-width: 160px; box-shadow: 0px 8px 16px 0px rgba(0,0,0,0.2); z-index: 1; }
.dropdown-content a { color: black; padding: 12px 16px; text-decoration: none; display: block; }
.dropdown-content a:hover {background-color: #f1f1f1;} .dropdown:hover .dropdown-content {display: block;} .dropdown:hover .dropbtn {background-color: #3e8e41;} </style>
Use the left and right properties to determine whether the dropdown content opens from left to right or from right to left.
<style>
/* Create three equal columns that float next to each other */
.column { float: left; width: 33.33%; padding: 10px; height: 300px; /* Should be removed. Only for demonstration */ }
/* Clear floats after the columns */
.row:after { content: ""; display: table; clear: both; }
</style>
<style> table, th, td { border:1px solid black; } </style>
| Person 1 | Person 2 | Person 3 |
| --- | --- | --- |
| Emil | Tobias | Linus |
| 16 | 14 | 10 |
To understand the example better, we have added borders to the table.
Scrollbar in HTML Table

Introduction to Scrollbar in HTML Table: the scrollbar is a feature for scrolling a table's data both horizontally and vertically. We allocate the border, height, and width of the scrollable tables. By default, a vertical scroll bar is enabled once the data entered exceeds the maximum size in the vertical direction. In horizontal mode, when data is entered in paragraph format and not wrapped, the page shows a right arrow as the option to scroll the data horizontally. The scroll options can be operated with the mouse pointer.
Creating a Scrollbar in HTML Table: when the content of a text box is too large to fit, an HTML scroll box makes sure the box grows a scroll bar. This displays well on large mobile screens, but on some small mobile screens it may not display properly. In web applications that run in the browser, some plugins are needed to show certain features. If we want to add a scroll bar option in HTML, we use the "overflow" property and set it to auto, which enables both horizontal and vertical scroll bars. If we want only a vertical bar, we use "overflow-y" instead.
CSS File Syntax for Scrollbars:
{ overflow-x: scroll; /* adds a horizontal bar option */ overflow-y: scroll; /* adds a vertical bar option */ }
HTML File Syntax for Scrollbars: by using the <style> tag, we add the scroll options in the HTML page itself.
<style> div.scroll { width: 5px; height: 10px; overflow-x: scroll; } </style>

Examples of Scrollbar in HTML Table

Given below are examples of scrollbars in an HTML table:
Example #1 Code:
<title></title> <style> .divScroll { overflow:scroll; height:100px; width:200px; } </style>

Output:
The above example shows that we have enabled both horizontal and vertical scroll bars; if the text exceeds the text box limits, the scroll bars are enabled automatically.
Example #2 Code:
<style> .divScroll { overflow:scroll; height:25px; width:200px; } </style>

Output:
The example text is displayed on the web page in a small scrollable box.

Native Library Loading in .NET 5
2020-08-24 (https://www.mono-project.com/news/2020/08/24/native-loader-net5)

After years of work, Mono can now be built out of the dotnet/runtime repository in a .NET 5-compatible mode! This mode means numerous changes in the available APIs, managed and embedding, as well as internal runtime behavioral changes to better align Mono with CoreCLR and the .NET ecosystem.
One area with multiple highly impactful changes to the runtime internals is library loading. For managed assemblies, Mono now follows the algorithms outlined on this page, which result from the removal of AppDomains and the new AssemblyLoadContext APIs. The only exception to this is that Mono still supports bundles registered via the embedding API, and so the runtime will check that as part of the probing logic.
The managed loading changes are fairly clear and well documented, but unmanaged library loading has changed in numerous ways, some of them far more subtle.
- New P/Invoke resolution algorithm
- Dropped support for DllMap
- Unmanaged library loading defaults to RTLD_LOCAL
- Added support for DefaultDllImportSearchPathsAttribute
- On non-Windows platforms, Mono and CoreCLR no longer attempt to probe for A/W variants of symbols
- Default loader log level changed from INFO to DEBUG, and new log entries added for the new algorithm
More detail is given where appropriate in the sections below.
The new unmanaged loading algorithm makes no mention of DllMap, as Mono has removed its functionality almost entirely in .NET 5. DllMap’s XML config files have been disabled on every platform out of security concerns. The DllMap embedding APIs are also disabled on desktop platforms, though this may change.
In place of DllMap, users are encouraged to utilize the NativeLibrary resolution APIs, which are set in managed code, and the runtime hosting properties, which are set by embedders with the monovm_initialize function.
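For illustration, here is a minimal sketch of a DllMap-style mapping done with the NativeLibrary resolver API; the logical name "foo" and the platform file name are placeholders invented for this sketch, not names from the post:

using System;
using System.Runtime.InteropServices;

static class NativeResolver
{
    // Map the logical name "foo" to a platform-specific file name,
    // roughly what a DllMap XML entry used to do.
    public static void Register ()
    {
        NativeLibrary.SetDllImportResolver (typeof (NativeResolver).Assembly,
            (libraryName, assembly, searchPath) =>
            {
                if (libraryName == "foo")
                    return NativeLibrary.Load ("libfoo.so.1", assembly, searchPath);
                return IntPtr.Zero; // fall back to the default loading logic
            });
    }
}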
We recognize that this does not sufficiently cover some existing mono/mono scenarios. If the NativeLibrary APIs are insufficient for your use case, please tell us about it! We’re always looking to improve our interop functionality, and in particular with .NET 6 will be evaluating NativeLibrary, so community input would be greatly appreciated.
A more subtle, yet no less impactful change is that native library loading now defaults to RTLD_LOCAL to be consistent with CoreCLR and Windows, as opposed to our historical behavior of RTLD_GLOBAL. What this means in practice is that on Unix-like platforms, libraries are no longer loaded into a single global namespace, and when looking up symbols, the library must be correctly specified. This change prevents symbol collision, and will both break and enable various scenarios and libraries. For more information on the difference, see the dlopen man page.
For example: historically in Mono on Linux, it was possible to load library foo containing symbol bar, and then invoke bar with a P/Invoke like so:
// note the incorrect library name
[DllImport("asdf")]
public static extern int bar();
This will no longer work. For that P/Invoke to function correctly, the attribute would need to use the correct library name: [DllImport("foo")]. A lot of code in the wild that was using incorrect library names will need to be updated. However, this means that when loading two libraries containing the same symbol name, there is no longer a conflict.
There have been some embedding API changes as part of this. MONO_DL_MASK is no longer a full mask, as MONO_DL_GLOBAL has been introduced to specify RTLD_GLOBAL. If both MONO_DL_LOCAL and MONO_DL_GLOBAL are set, Mono will use local. See mono/utils/mono-dl-fallback.h for more info.
This also means that dynamically linking libmonosgen and attempting to resolve Mono symbols from dlopen(NULL, ...) will no longer work. __Internal has been preserved as a Mono-specific extension, but its meaning has been expanded. When P/Invoking into __Internal, the runtime will check both dlopen(NULL) and the runtime library in the case that they differ, so that users attempting to call Mono APIs with __Internal will not have those calls break.
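As a sketch, a P/Invoke into the runtime itself keeps working through this extension; the symbol name below is a placeholder used only for illustration, not a specific Mono API:

using System.Runtime.InteropServices;

static class Internals
{
    // With __Internal, the loader probes dlopen(NULL) and, if it differs,
    // the Mono runtime library itself. "some_runtime_symbol" is a
    // placeholder name.
    [DllImport ("__Internal")]
    public static extern int some_runtime_symbol ();
}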
Mono now supports the DefaultDllImportSearchPathsAttribute attribute, which can be found in System.Runtime.InteropServices. In particular, passing DllImportSearchPath.AssemblyDirectory is now required to have the loader search the executing assembly’s directory for native libraries, and the other Windows-specific loader options should be passed down when appropriate.
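A minimal sketch of opting a P/Invoke into assembly-directory probing; the library name "foo" is a placeholder:

using System.Runtime.InteropServices;

static class Native
{
    // Without AssemblyDirectory, the loader no longer searches the
    // executing assembly's directory for the native library.
    [DefaultDllImportSearchPaths (DllImportSearchPath.AssemblyDirectory)]
    [DllImport ("foo")]
    public static extern int bar ();
}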
And that’s it! If you have any further questions, feel free to ping us on Discord or Gitter.
Ryan Lucia

Note: This is a guest post by Jordi Mon Companys from Códice Software, a long-time Mono user, about how they used Mono to develop their flagship product.
Plastic SCM is a full version control stack. This means Plastic SCM comprises a full repo management core, command line (until here it would be the equivalent to bare Git), native GUIs on Linux, macOS and Windows, web interfaces, diff and merge tools (the equivalent to Meld, WinMerge or Kdiff3), and also a cloud hosting for repositories. Add Visual Studio plugins, integrations with the major Continuous Integration systems, IDEs and issue trackers.
Plastic SCM was first released in 2006 and didn't stop evolving in the last 13+ years, with 2-3 new public versions every week for the last 2 years.
Overall, Plastic SCM amounts to more than 1.5 million lines of code, 95% of them written in C#. This means we have extensively used Mono for everything non-Windows since the very early days, now almost a decade and a half ago.
And here goes the story.
When the first lines of Plastic SCM were written back in September 2005, the decision to go for C# was already made. But we knew a new version control system could only be considered as a serious alternative if it was truly cross-platform. Windows-only was not a choice.
Why, then, did we decide to go for .NET/C# instead of something more portable like Java, or C/C++? The reason was clear: because Mono existed. We would never have decided to use C# if Mono hadn't been there already. It promised a way to have cross-platform .NET, and we invested heavily in it. How did it work out? Read along, fellow monkeys!
Code once, run everywhere. That's what we embraced when we started doing our first tests with WinForms on Linux.
With very minor changes, the full Windows version was able to run on Linux and macOS (through X11). We later rewrote most of the controls we were using on WinForms to give them a more consistent look and feel:
We also did this as a workaround to basically skip some well-known issues with some standard controls. Obviously, desktop GUIs were not a big thing in Mono, and we felt like pioneers finding our way through a wild territory :-)
Something many won't know is that for a couple of years we were the unofficial maintainers of the Mono port for Solaris.
We were lucky enough to hire a former Mono hacker, our friend Dick Porter, who enrolled to help us port Plastic SCM to exotic platforms like Solaris and HP-UX.
By that time, we still relied on WinForms everywhere, which was a challenge on its own.
You can see what Plastic SCM running on Solaris looked like:
And:
We were super excited about it because it allowed us to run Plastic SCM on some old Sun workstations we had around. And they featured SPARC CPUs, 64-bit big endian and everything. In fact, we found and protected some edge cases caused by big endian :-).
We were hit by some of the limitations of Boehm GC, so we happily provided the developers working on the new sgen collector with a memory-hungry Plastic SCM environment. We used to run some memory-intensive automated tests for them, so we mutually benefited from the effort.
This was mostly before everyone moved to the cloud, so we ran most of these tests in about 300 real machines controlled by our in-house PNUnit test environment.
Depending on X11 to run our GUI on macOS wasn't greatly perceived by hardcore Apple users who prefer a smooth experience. So, we decided to radically change our approach to GUI development. We committed to create native GUIs for each of our platforms.
- Windows would still benefit from the same original codebase. But, removing the common denominator requirements allowed us to introduce new controls and enrich the overall experience.
- The macOS GUI would be rewritten taking advantage of MonoMac, which later became XamarinMac, the technology we still use. It was going to be an entirely new codebase that only shared the core operations with Windows, while the entire intermediate layer would be developed from scratch.
- And finally, we decided to go for a native GTKSharp-based GUI for Linux. In fact, it would be a good exercise to see how much of the common layer could be actually shared between macOS and Linux. It worked quite well.
Some takeaways from this new approach:
- We decided to entirely skip "visual designer tools". ResX on WinForms proved to be a nightmare when used cross-platform, and depending on locating controls by hand with a designer on a canvas wasn't good for keeping consistent margins, spacing and so on. So, we went cowboy: every single GUI you see in Plastic SCM now (except the older ones in WinForms) is basically coded, not designed. Every button is created with "new Button()", and so on. It can sound like a slowdown, but it certainly pays off when maintaining code: you spend much less time dealing with code than with designers.
- We created our own automated GUI test environment to test the Linux and macOS GUIs. There weren't any cross-platform solutions for Mono, so we decided to create our own.
- We realized how much better GTK was, and is, than any other solution from a programmer’s perspective. We love to code GTK. Yes, it is also possibly the ugliest of them all in visual terms, but you can't have it all :-)
This is how Plastic SCM looks now, enjoy:
Many of you might think: how can a version control be written in Mono/C# and expect to compete against Git or Subversion or even Perforce which are all written in C or a C/C++ combination?
Speed was an obsession for us since day one, and we found C# to be quite capable if used carefully. The only downside is that when you are in a C#/managed world you tend to think allocating memory is free and you pay for it when that happens (something that radically changed with the arrival of .NET Core and the entire Span<T> and their focus on making the platform a real option for highly scalable and performant solutions). But, over the years we learned a few lessons, started to be much more aware of the importance of saving allocations, and the results backed up that reasoning.
Below you can see how a 2019 version of Plastic SCM compares to Git and a major commercial version control competitor performing quite common operations:
As you can see, Plastic SCM consistently beats Git, which we believe is quite an achievement considering it is written in .NET/Mono/C# instead of system-level C.
In terms of pure scalability, we also achieve quite good results compared to commercial version controls:
We don't compare to Git here since what we are running are pure centralized operations (direct checkin, or commit if you prefer), something Git can't do. Plastic SCM can work in Git or SVN modes, with local repos or a direct connection to a central server.
In fact, some of our highest loaded servers on production run on Linux/Mono serving more than 3000 developers on a big enterprise setup. A single server handles most of the load singlehandedly :-)
If you’ve read the whole use case you already know that we have been using Mono for the purpose of providing a full version control stack since 2006! That is for almost 13 years, right after the company was founded and the first product of our portfolio was delivered.
After all this time it has helped us build and distribute the same product across the different environments: a full stack version control system that is pioneering software configuration management in many areas. The results are there and hey, we are demanding: versatility, performance, scalability and resilience are not an option for our clients, or us. Given the structural relevance of SCM tools to any software project, it is paramount for Plastic SCM to deliver a solid product across all platforms, and we do it. To us Mono = cross-platform, and that is a huge advantage since we can focus on functionality, roadmap and support while Mono makes the product one same experience everywhere. Mono is definitely a foundational part of our toolkit.
Jordi Mon Companys

By virtue of using LLVM, Mono has access to a wide suite of tools and optimization backends. A lot of active research uses LLVM IR. One such research project, Souper, tries to brute-force a search for missed optimizations in our emitted code. The .NET community may have software projects that benefit from using Souper directly to generate code, rather than waiting for us to find ways to automate those optimizations ourselves. This algorithm can generate code that would be very challenging for a traditional compiler to find.
The Mono .NET VM is a rather nimble beast. Rather than requiring all users to live with the performance characteristics of a given policy, we often choose to create multiple backends and bindings that exploit what’s best of the native platform while presenting a common interface. Part of this is the choice between using an interpreter, a Just-In-Time compiler, or an Ahead-Of-Time compiler.
AOT compilation is attractive to some projects for the combination of optimized code with low start-up time. This is the classic advantage of native code over code from a JIT or interpreter. AOT code is often much worse than code from a JIT because of a need for indirection in code that references objects in run-time memory. It’s important for AOT code to exploit every possible optimization to make up for this disadvantage. For this, we increasingly rely on optimizations performed by LLVM.
LLVM’s optimization passes can analyze a program globally. It is able to see through layers of abstractions and identify repeated or needless operations in a program’s global flow. Likewise, it can examine the operations in a small segment of code and make them perfect with respect to one another. Sometimes though, we fail to optimize code. Classic compilers work by analyzing the control-flow and dataflow of a program and matching on specific patterns such as stores to variables that aren’t used later and constants that are stored to variables rather than being propagated everywhere they can be. If the pattern matches, the transformation can take place. Sometimes the code we feed into LLVM does not match the patterns of inefficiency that it looks for, and we don’t get an optimization.
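To make that concrete, here is a small illustration, invented for this post rather than taken from Souper or Mono, of the two patterns just mentioned; an optimizer that matches them rewrites the method to simply return the constant 43:

static int Example ()
{
    int x = 42;     // constant store: 42 is propagated to its use below
    int unused = 7; // dead store: never read again, so it is deleted
    return x + 1;   // constant-folded to 43 after propagation
}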
What’s worse is that we don’t know that we hit this optimization blocker. We don’t know what we expect from code until it’s a problem and we’re really spending time optimizing it. Spotting trends in generated machine code across thousands of methods is incredibly labor intensive. Often only really bad code that runs many many times will catch attention. Fixing every single missed optimization and finding every single missed optimization becomes a chicken-and-egg problem.
The solution to some manifestations of this problem is the use of superoptimizers. The academic discipline of superoptimizers is very old. The idea is to treat the code that was written as more of a restriction, a specification. The superoptimizer generates a ton of native code and checks the ways in which it behaves differently than the written code. If it can generate a faster native code sequence than the compiler generated while keeping behavior exactly the same, it wins.
This “exactly the same” part can be incredibly expensive if not done correctly. The computational effort involved has historically kept superoptimization from being used very often. Since then, it has gotten a lot easier to run computationally intensive jobs. Computer hardware has become orders of magnitude more powerful. Theorems around equivalence checking and control-flow representations made more powerful claims and used algorithms with better running times. We are therefore seeing superoptimization research reemerge at this time.
One superoptimizer in particular, named Souper, has reached maturity while interoperating with the industry standard code generator (LLVM) and the industry standard SMT engine (Z3). It has kickstarted a renewed faith in researchers that superoptimization is a reasonable policy. It can take the LLVM IR that a compiler was going to feed into LLVM, and compute better IR. This can sometimes take a lot of time, and the code emitted is the result of a process that isn’t auditable. The pipeline is placing total faith in Souper for the correctness of generated code.
It’s mostly useful for compiler engineers to tell that optimizations were missed, and to identify how to fix that using conventional pattern matching over the program’s control-flow and dataflow graphs. That said, Souper offers the ability to drop in for clang and to generate the code that is run. Some projects are eager to make any trade-offs for performance that are acceptable. Other projects may want to get a feel for how fast they could run if they were to invest in making sure Mono generates good code. If the compile time increase doesn’t discourage them, many projects may find some benefit in such an optimizing compiler.
I recommend that curious readers install Z3, get a checkout of https://github.com/google/souper, and complete the compilation process described in that documentation. When AOTing code with Mono, they’re going to want to pass the command-line flags named there into the --aot=llvmopts= argument. As of the time of this writing, that is:

llvmopts="-load /path/to/libsouperPass.so -souper -z3-path=/usr/bin/z3"
Mono will then allow Souper to step in during the middle of the LLVM compilation and try its best at brute-forcing some better code. If there’s anything short and fast that does the job better, it will be found.
It is frankly amazing that Mono can get such extensive optimizations simply by compiling to LLVM IR. Without changing a single line of Mono’s source, we changed our compilation pipeline in truly dramatic ways. This shows off the lack of expectations that Mono has about the layout of our generated code. This shows off the flexibility of LLVM as a code generation framework and to Mono as an embedded runtime. Embedders using Mono should consider using our LLVM backend with this and other third-party LLVM optimization passes. Feedback about the impact of our research on real-world programs will help us decide what we should be using by default.
Alexander Kyte

During the 2018 Microsoft Hack Week, members of the Mono team explored the idea of replacing Mono’s code generation engine written in C with a code generation engine written in C#.
In this blog post we describe our motivation, the interface between the native Mono runtime and the managed compiler and how we implemented the new managed compiler in C#.
Mono’s runtime and JIT compiler are entirely written in C, a highly portable language that has served the project well. Yet, we feel jealous of our own users, who get to write code in a high-level language and enjoy its safety, its luxury and its benefits, while the Mono runtime continues to be written in C.
We decided to explore whether we could make Mono’s compilation engine pluggable and then plug a code generator written entirely in C#. If this were to work, we could more easily prototype, write new optimizations and make it simpler for developers to safely try changes in the JIT.
This idea has been explored by research projects like JikesRVM, Maxine and Graal for Java. In the .NET world, the Unity team wrote an IL-to-C++ compiler called il2cpp. They also experimented with a managed JIT recently.
In this blog post, we discuss the prototype that we built. The code mentioned in this blog post can be found here: https://github.com/lambdageek/mono/tree/mjit/mcs/class/Mono.Compiler
The Mono runtime provides various services: just-in-time compilation, assembly loading, an IO interface, thread management and debugging capabilities. The code generation engine in Mono is called mini and is used both for static compilation and just-in-time compilation.
Mono’s code generation has a number of dimensions:
- Code can be either interpreted, or compiled to native code
- When compiling to native code, this can be done just-in-time, or it can be batch compiled, also known as ahead-of-time compilation.
- Mono today has two code generators: the light and fast mini JIT engine, and the heavy-duty engine based on the LLVM optimizing compiler. These two are not completely unaware of each other; Mono’s LLVM support reuses many parts of the mini engine.
This project started with a desire to make this division even more clear, and to swap up the native code generation engine in ‘mini’ with one that could be completely implemented in a .NET language. In our prototype we used C#, but other languages like F# or IronPython could be used as well.
To move the JIT to the managed world, we introduced the ICompiler interface, which must be implemented by your compilation engine and is invoked on demand when a specific method needs to be compiled.
This is the interface that you must implement:
interface ICompiler {
    CompilationResult CompileMethod (IRuntimeInformation runtimeInfo,
                                     MethodInfo methodInfo,
                                     CompilationFlags flags,
                                     out NativeCodeHandle nativeCode);

    string Name { get; }
}
The CompileMethod () receives an IRuntimeInformation reference, which provides services for the compiler, as well as a MethodInfo that represents the method to be compiled, and it is expected to set the nativeCode parameter to the generated code information.
The NativeCodeHandle merely represents the generated code address and its length.
This is the IRuntimeInformation definition, which shows the methods available to CompileMethod to perform its work:
interface IRuntimeInformation {
    InstalledRuntimeCode InstallCompilationResult (CompilationResult result, MethodInfo methodInfo, NativeCodeHandle codeHandle);

    object ExecuteInstalledMethod (InstalledRuntimeCode irc, params object[] args);

    ClassInfo GetClassInfoFor (string className);

    MethodInfo GetMethodInfoFor (ClassInfo classInfo, string methodName);

    FieldInfo GetFieldInfoForToken (MethodInfo mi, int token);

    IntPtr ComputeFieldAddress (FieldInfo fi);

    /// For a given array type, get the offset of the vector relative to the base address.
    uint GetArrayBaseOffset (ClrType type);
}
We currently have one implementation of ICompiler; we call it the “BigStep” compiler. When wired up, this is what the process looks like when we compile a method with it:
The mini runtime can call into managed code via CompileMethod upon a compilation request. For the code generator to do its work, it needs to obtain some information about the current environment. This information is surfaced by the IRuntimeInformation interface. Once the compilation is done, it will return a blob of native instructions to the runtime. The returned code is then “installed” in your application.
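To make the handshake concrete, here is a minimal sketch of an ICompiler implementation; the body is illustrative only, and the "not compiled" return value is an assumption rather than the actual Mono.Compiler API:

// Illustrative only: a trivial compiler that declines every method.
// A real implementation would walk methodInfo's IL and emit native code.
class NullCompiler : ICompiler
{
    public string Name { get { return "null"; } }

    public CompilationResult CompileMethod (IRuntimeInformation runtimeInfo,
                                            MethodInfo methodInfo,
                                            CompilationFlags flags,
                                            out NativeCodeHandle nativeCode)
    {
        nativeCode = default (NativeCodeHandle);
        return default (CompilationResult); // assumed to signal "not compiled"
    }
}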
Now there is a trick question: Who is going to compile the compiler?
The compiler written in C# is initially executed with one of the built-in engines (either the interpreter, or the JIT engine).
Our first ICompiler implementation is called the BigStep compiler.
This compiler was designed and implemented by a developer (Ming Zhou) not affiliated with Mono Runtime Team. It is a perfect showcase of how the work we presented through this project can quickly enable a third-party to build their own compiler without much hassle interacting with the runtime internals.
The BigStep compiler implements an IL to LLVM compiler. This was convenient to build the proof of concept and ensure that the design was sound, while delegating all the hard compilation work to the LLVM compiler engine.
A lot can be said when it comes to the design and architecture of a compiler, but our main point here is to emphasize how easy it can be, with what we have just introduced to Mono runtime, to bridge IL code with a customized backend.
The IL code is streamed into the compiler interface through an iterator, with information such as op-code, index and parameters immediately available to the user. See below for more details about the prototype.
Another beauty of moving parts of the runtime to the managed side is that we can test the JIT compiler without recompiling the native runtime, so essentially developing a normal C# application.
The InstallCompilationResult () can be used to register the compiled method with the runtime, and the ExecuteInstalledMethod () can be used to invoke a method with the provided arguments. Here is an example of how this is used:
public static int AddMethod (int a, int b) {
    return a + b;
}

[Test]
public void TestAddMethod () {
    ClassInfo ci = runtimeInfo.GetClassInfoFor (typeof (ICompilerTests).AssemblyQualifiedName);
    MethodInfo mi = runtimeInfo.GetMethodInfoFor (ci, "AddMethod");
    NativeCodeHandle nativeCode;

    CompilationResult result = compiler.CompileMethod (runtimeInfo, mi, CompilationFlags.None, out nativeCode);
    InstalledRuntimeCode irc = runtimeInfo.InstallCompilationResult (result, mi, nativeCode);

    int addition = (int) runtimeInfo.ExecuteInstalledMethod (irc, 1, 2);
    Assert.AreEqual (addition, 3);
}
We can ask the host VM for the actual result, assuming it’s our gold standard:
int mjitResult = (int) runtimeInfo.ExecuteInstalledMethod (irc, 666, 1337);
int hostedResult = AddMethod (666, 1337);
Assert.AreEqual (mjitResult, hostedResult);
This eases development of a compiler tremendously.
We don’t need to eat our own dog food during debugging, but when we feel ready we can flip a switch and use the compiler as our system compiler. This is actually what happens if you run make -C mcs/class/Mono.Compiler run-test in the mjit branch: we use this API to test the managed compiler while running on the regular Mini JIT.
As part of this effort, we also wrapped Mono’s JIT in the ICompiler interface. MiniCompiler calls back into native code and invokes the regular Mini JIT. It works surprisingly well, however there is a caveat: once back in the native world, the Mini JIT doesn’t need to go through IRuntimeInformation and just uses its old ways to retrieve runtime details. Though, we can turn this into an incremental process now: we can identify those parts, add them to IRuntimeInformation, and change the Mini JIT so that it uses the new API.
We strongly believe in the long-term value of this project. A code base in managed code is more approachable for developers and thus easier to extend and maintain. Even if we never see this work upstream, it helped us to better understand the boundary between the runtime and the JIT compiler, and who knows, it might help us to integrate RyuJIT into Mono one day 😉
We should also note that IRuntimeInformation can be implemented by any other .NET VM: hello CoreCLR folks 👋
If you are curious about this project, ping us on our Gitter channel.
Since the target language was LLVM IR, we had to build a translator that converted the stack-based operations from IL into the register-based operations of LLVM.
Since many potential targets are register based, we decided to design a framework that makes the part where we interpret the IL logic reusable. To this goal, we implemented an engine to turn the stack-based operations into register operations.
Consider the ADD operation in IL. This operation pops two operands from the stack, performs the addition, and pushes the result back onto the stack. This is documented in ECMA-335 as follows:

Stack Transition: ..., value1, value2 -> ..., result
The actual kind of addition that is performed depends on the types of the values in the stack. If the values are integers, the addition is an integer addition. If the values are floating point values, then the operation is a floating point addition.
To re-interpret this in a register-based semantics, we treat each pushed frame in the stack as a different temporary value. This means if a frame is popped out and a new one comes in, although it has the same stack depth as the previous one, it’s a new temporary value.
Each temporary value is assigned a unique name. Then an IL instruction can be unambiguously presented in a form using temporary names instead of stack changes. For example, the ADD operation becomes:

Temp3 := ADD Temp1 Temp2
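Below is a small self-contained sketch of that renaming for ADD; every name here is invented for illustration and is not the Mono.Compiler engine itself:

using System;
using System.Collections.Generic;

static class StackToTemps
{
    static readonly Stack<string> stack = new Stack<string> ();
    static int counter;

    static string PushTemp ()
    {
        string temp = "Temp" + (++counter);
        stack.Push (temp);
        return temp;
    }

    // Stack transition: ..., value1, value2 -> ..., result
    static void EmitAdd ()
    {
        string value2 = stack.Pop ();
        string value1 = stack.Pop ();
        string result = PushTemp ();
        Console.WriteLine ("{0} := ADD {1} {2}", result, value1, value2);
    }

    static void Main ()
    {
        PushTemp (); // Temp1, e.g. a loaded argument
        PushTemp (); // Temp2
        EmitAdd ();  // prints: Temp3 := ADD Temp1 Temp2
    }
}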
Other than coming from the stack, there are other sources of data during evaluation: local variables, arguments, constants and instruction offsets (used for branching). These sources are typed differently from the stack temporaries, so that the downstream processor (more on that shortly) can properly map them into their context.
A third problem that might be common among those target languages is the jumping target for branching operations. IL’s branching operation assumes an implicit target should the result be taken: The next instruction. But branching operations in LLVM IR must explicitly declare the targets for both taken and not-taken paths. To make this possible, the engine performs a pre-pass before the actual execution, during which it gathers all the explicit and implicit targets. In the actual execution, it will emit branching instructions with both targets.
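A sketch of that pre-pass, under an assumed minimal instruction shape (the real engine's types differ):

using System.Collections.Generic;

// Assumed shape for illustration: offset/size in bytes, one branch target.
sealed class Insn
{
    public int Offset, Size, Target;
    public bool IsBranch;
}

static class BranchPrePass
{
    // Collect every explicit branch target plus the implicit fall-through
    // target (the next instruction), so the emitter can later declare both
    // the taken and not-taken paths explicitly in LLVM IR.
    public static HashSet<int> CollectTargets (IEnumerable<Insn> body)
    {
        var targets = new HashSet<int> ();
        foreach (Insn insn in body)
        {
            if (!insn.IsBranch)
                continue;
            targets.Add (insn.Target);             // explicit, taken path
            targets.Add (insn.Offset + insn.Size); // implicit, not-taken path
        }
        return targets;
    }
}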
As we mentioned earlier, the execution engine is a common layer that merely translates the instruction to a more generic form. It then sends out each instruction to IOperationProcessor, an interface that performs the actual translation. Compared to the instruction received from ICompiler, the presentation here, OperationInfo, is much more consumable: in addition to the op codes, it has an array of the input operands, and a result operand:
public class OperationInfo
{
    ... ...
    internal IOperand[] Operands { get; set; }
    internal TempOperand Result { get; set; }
    ... ...
}
There are several types of operands: ArgumentOperand, LocalOperand, ConstOperand, TempOperand, BranchTargetOperand, etc. Note that the result, if it exists, is always a TempOperand. The most important property on IOperand is its Name, which unambiguously defines the source of data in the IL runtime. If an operand with the same name comes in another operation, it unquestionably tells us the very same data address is targeted again. It’s paramount for the processor to accurately map each name to its own storage.
The processor handles each operand according to its type. For example, if it’s an argument operand, we might consider retrieving the value from the corresponding argument. An x86 processor may map this to a register. In the case of LLVM, we simply go to fetch it from a named value that is pre-allocated at the beginning of method construction. The resolution strategy is similar for other operands:
- LocalOperand: fetch the value from the pre-allocated address
- ConstOperand: use the const value carried by the operand
- BranchTargetOperand: use the index carried by the operand
Since the temp value uniquely represents an expression stack frame from the CLR runtime, it will be mapped to a register. Luckily for us, LLVM allows an infinite number of registers, so we simply name a new one for each different temp operand. If a temp operand is reused, however, the very same register must be as well. A sketch of this resolution step follows below.
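Pulling the strategy together, here is a rough sketch of the processor's mapping step; the operand type names and the Name property come from the post, but the storage scheme below is an assumption made for illustration, not the actual implementation:

using System;
using System.Collections.Generic;

// Illustrative only: maps operand names to LLVM-style storage strings.
class OperandResolver
{
    readonly Dictionary<string, string> registers = new Dictionary<string, string> ();
    int nextRegister;

    public string Resolve (IOperand op)
    {
        if (op is ArgumentOperand || op is LocalOperand)
            return "%" + op.Name;      // pre-allocated named value/address
        if (op is ConstOperand)
            return op.Name;            // constant carried by the operand
        if (op is BranchTargetOperand)
            return "block_" + op.Name; // target index carried by the operand
        if (op is TempOperand)
        {
            // A reused temp name must resolve to the very same register.
            string reg;
            if (!registers.TryGetValue (op.Name, out reg))
                registers[op.Name] = reg = "%r" + nextRegister++;
            return reg;
        }
        throw new NotSupportedException ();
    }
}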
We use the LLVMSharp binding to communicate with LLVM.
Ludovic Henry, Miguel de Icaza, Aleksey Kliger, Bernhard Urban and Ming Zhou

Note: This is a guest post by Calvin Buckley (@NattyNarwhal on GitHub) introducing the community port of Mono to IBM AIX and IBM i. If you’d like to help with this community port please contact the maintainers on Gitter.
You might have noticed this in the Mono 5.12 release notes: Mono now includes support for IBM AIX and IBM i, two very different yet (mostly!) compatible operating systems. This post should serve as an introduction to this port.
Porting Mono to a new operating system is not as hard as you might think! Pretty much the entire world is POSIX compliant these days, and Mono is a large yet manageable codebase due to a low number of dependencies, use of plain C99, and an emphasis on portability. Most common processor architectures in use are supported by the code generator, though more obscure ISAs will have some caveats.
Pretty much all of the work you do will be twiddling #ifdefs to accommodate for the target platform’s quirks, such as missing or different preprocessor definitions and functions, adding the platform to definitions so it is supported by core functionality, and occasionally having to tweak the runtime or build system to handle when the system does something completely differently than others. In the case of AIX and IBM i, I had to do all of these things.
For some background on what needed to happen, we can start by giving some background on our target platforms.
Both of our targets run on 64-bit PowerPC processors in big endian mode. Mono does support PowerPC, and Bernhard Urban maintains it. What is odd about the calling conventions on AIX (shared occasionally by Linux) is the use of function descriptors, which means that pointers to functions do not point to code, but instead point to metadata about them. This can cause bugs in the JIT if you are not careful to consume or produce function descriptors instead of raw pointers when needed. Because the runtime is better tested on 64-bit PowerPC, and machines are fast enough that the extra overhead is not significant, we always build a 64-bit runtime.
In addition to a strange calling convention, AIX also has a different binary format - that means that currently, the ahead-of-time compiler does not work. While most Unix-like operating systems use ELF, AIX (and by extension, IBM i for the purposes of this port) use XCOFF, a subset of the Windows PE binary format.
AIX is a Unix (descended from the System V rather than the BSD side of the family) that runs on PowerPC systems. Despite being a Unix, it has some quirks of its own, that I will describe in this article.
IBM i (formerly known as i5/OS or OS/400) is decidedly not a Unix. Unlike Unix, it has an object-based filesystem where all objects are mapped into a single humongous address space, backed on disk known as single level storage – real main storage (RAM) holds pages of objects “in use” and acts as a cache for objects that reside permanently on disk. Instead of flat files, IBM i uses database tables as the means to store data. (On IBM i, all files are database tables, and a file is just one of the “object types” supported by IBM i; others include libraries and programs.) Programs on IBM i are not simple native binaries, but instead are “encapsulated” objects that contain an intermediate form, called Machine Interface instructions, (similar to MSIL/CIL) that is then translated and optimized ahead-of-time for the native hardware (or upon first use); this also provides part of the security model and has allowed users to transition from custom CISC CPUs to enhanced PowerPC variants, without having to recompile their programs from the original source code.
This sounds similar to running inside of WebAssembly rather than any kind of Unix – so, then, how do you port programs dependent on POSIX? IBM i provides an environment called PASE (Portable Application Solutions Environment) that provides binary compatibility for AIX executables, for a large subset of the AIX ABI, within IBM i. But Unix and IBM i are totally different; Unix has files and per-process address spaces, and IBM i normally does not, so how do you make these incongruent systems work?
To try to bridge the gap, IBM i also has an “Integrated File System” that supports byte-stream file objects in a true hierarchical file system directory hierarchy. For running Unix programs that expect their own address space, IBM i provides something called “teraspace” that provides a large private address space per process or job. This requires IBM i to completely change the MMU mode and do a cache/TLB flush every time it enters and exits the Unix world, making system calls somewhat expensive; in particular, forking and I/O. While some system calls are not implemented, there are more than enough to port non-trivial AIX programs to the PASE environment, even with its quirks and performance limitations. You could even build them entirely inside of the PASE environment.
A port to the native IBM i environment outputting MI code with the ahead-of-time compiler has been considered, but it would take a lot of work to write an MI backend for the JIT, use the native APIs in the runtime, and handle how different the environment is from anything else Mono runs on. As such, I instead targeted PASE and AIX for the ease of porting existing POSIX-compatible code.
The port came out of some IBM i users expressing an interest in wanting to run .NET programs on their systems. A friend of mine involved in the IBM i community had noticed I was working on a (mostly complete, but not fully working) Haiku port, and approached me to see if it could be done. Considering that I now had experience with porting Mono to new platforms, and there was already a PowerPC JIT, I decided to take the challenge.
The primary porting target was IBM i, with AIX support being a by-product. Starting by building on IBM i, I set up a chroot environment to work in (chroot support was added to PASE fairly recently), setting up a toolchain with AIX packages. Initial bring-up of the port happened on IBM i, up to the point where the runtime was built, but execution of generated code was not happening. One problem with building on IBM i, however, is that the performance limitations really start to show. While building took the same amount of time on the system I had access to (dual POWER6, taking roughly 30 minutes to build the runtime) as AIX, due to it mostly being computation, the configure script was extremely impacted due to its emphasis on many small reads and writes with lots of forking. Whereas it took AIX 5 minutes and Linux 2 minutes to run through the configure script, it took IBM i well over an hour to run through all of it. (Ouch!)
At this point, I submitted the initial branch as a pull request for review. A lot of back and forth went on to work on the underlying bugs as well as following proper style and practices for Mono. I set up an AIX VM on the machine, and switched to cross-compiling from AIX to IBM i; targeting both platforms with the same source and binary. Because I was not building on IBM i any longer, I had to periodically copy binaries over to IBM i, to check if Mono was using missing libc functions or system calls, or if I had tripped on some behaviour that PASE exhibits differently from AIX. With the improved iteration time, I could start working on the actual porting work much more quickly.
To help with matters where I was unsure exactly how AIX worked, David Edelsohn from IBM helped by explaining how AIX handles things like calling conventions, libraries, issues with GCC, and best practices for dealing with porting things to AIX.
There are some unique aspects of AIX and the subset that PASE provides, beyond the usual #ifdef handling.
One annoyance I had was how poor the GNU tools are on AIX. GNU binutils are effectively useless on AIX, so I had to explicitly use IBM’s binutils, and deal with some small problems related to autotools with environment variables and assumption of GNU ld features in makefiles. I had also dealt with some issues in older versions of GCC (which is actually fairly well supported on AIX, all things considered) that made me upgrade to a newer version. However, GCC’s “fixincludes” tool to try to mend GCC compatibility issues in system header files in fact mangled them, causing them to be missing some definitions found in libraries. (Sometimes they were in libc, but never defined in the headers in the first place!)
Improper use of function pointers was sometimes a problem. Based on the advice of Bernhard, there was a problem with the function descriptor #ifdefs, which had caused a mix-up interpreting function pointers as code. Once that had been fixed, Mono was running generated code on AIX for the first time – quite a sight to behold!
One particularly nerve-racking issue that bugged me while trying to bootstrap was the Decimal type returning a completely bogus value when dividing, causing a nonsense overflow condition. Because of constant inlining, this occurred when building the BCL, so it was hard to put off. With some careful debugging from my friend, comparing the variable state between x86 and PPC when dividing a decimal, we determined exactly where the incorrect endianness handling had taken place, and I came up with a fix.
While Mono has historically handled different endianness just fine, Mono has started to replace portions of its own home-grown BCL with CoreFX (the open-source Microsoft BCL), and it did not have the same rigor towards endianness issues. Mono does patch CoreFX code, but it sometimes pulls in new code that has not had endianness (or other such possible compatibility issues) worked out yet and thus requires further patching. In this case, the code had already been fixed for big endian before, but pulling in updated code from CoreFX had created a new problem with endianness.
On AIX, there are two ways to handle libraries. One is your typical System V style linking with .so libraries; this isn’t used by default, but can be forced. The other way is the “native” way to do it, where objects are stored in an archive (.a) typically used for holding objects used for static linking. Because AIX always uses position-independent code, multiple objects are combined into a single object and then inserted into the archive. You can then access the library like normal. Using this technique, you can even fit multiple shared objects of the same version into a single archive! This took only minimal changes to support; I only had to adjust the dynamic library loader to tell it to look inside of archive files, and some build system tweaks to point it to the proper archive and objects to look for. (Annoyingly, we have to hardcode some version names of library objects. Even then, the build system still needs revision for cases when it assumes that library names are just the name and an extension.)
When Mono tries to access an object reference, and the reference (a pointer) is null, (that is, zero) Mono does not normally check to see if the pointer is null. On most operating systems, when a process accesses invalid memory such as a null pointer, it sends the process a signal (such as SIGSEGV) and if the program does not handle that signal, it will terminate the program. Normally, Mono registers a signal handler, and instead of checking for null, it would just try to dereference a null pointer anyways to let the signal handler interrupt and return an exception to managed code instead. AIX doesn’t do that – it lets programs dereference null pointers anyway! What gives?
Accessing memory via a null pointer is not actually defined by the ANSI C standards – this is a case of a dreaded undefined behaviour. Mono relied on the assumption that most operating systems did it in the typical way of sending a signal to the process. What AIX instead does is to implement a “null page” mapped at 0x0 and accepts reads and writes to it. (You could also execute from it, but since all zeroes is an invalid opcode on PowerPC, this does not do much but throw an illegal instruction signal at the process.) This is a historical decision, relating back to code optimizations implemented in older IBM compilers made where they used speculative execution in compiler-generated code during the 1980s for improved performance when evaluating complex logical expressions. Because we cannot rely on handling a signal to catch the null dereference, we can instead force the behaviour to always check if pointers are null, (normally reserved for runtime debugging) to be on all the time.
BoringSSL is required to get the modern TLS required by newer websites. The build system, instead of autotools and make, is CMake based. Luckily, this worked fine on AIX, though I had to apply some massaging for it to do 64-bit library mangling. For a while, I was stumped by an illegal instruction error that turned out to be due to not linking pthread into the library, and it not warning about it.
It turns out that even though BoringSSL was now working, one cipher suite (secp256r1) was not, so sites using that cipher were broken. To try to test it, I had gone “yak shaving” to build what was needed for the test harness according to the README; Ninja and Go. I had a heck of a time trying to build Go on a PPC Linux system to triage, but as it turned out, I did not actually need it anyway – Mono had tweaked the build system so that it was not needed after all; I just had to flip a CMake flag to let it build the tests and run them manually. After figuring out what exactly was wrong, it turned out to be an endianness issue in an optimized path. A fix was attempted for it, but in the end, only disabling it worked and let the cipher run fine on big endian PowerPC. Since the code came from Google code that has been rewritten in both BoringSSL and OpenSSL upstream’s latest sources, it is due to be replaced the next time Mono’s BoringSSL fork gets updated.
I also hit some spurious and strange I/O issues with threading. Threads would complain that they had an unexpected errno of 0 (indicating success). What happened was that AIX does not assume programs are thread-safe by default, so errno was not thread-local. One small #define later, and that was fixed. (Miguel de Icaza was amused that some operating systems still consider thread safety to be an advanced feature. 🙂)
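A sketch of the kind of fix involved, assuming the classic AIX convention that defining _THREAD_SAFE before the system headers switches errno to a per-thread location (illustrative; the exact definition Mono uses may differ):

```c
/* Without _THREAD_SAFE on AIX, errno is one global shared by every
 * thread, so a failure in one thread can clobber another's errno. */
#define _THREAD_SAFE 1
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker (void *arg)
{
	(void) arg;
	errno = 0;
	close (-1);	/* always fails with EBADF */
	/* With a thread-local errno, this thread reliably sees EBADF. */
	printf ("worker sees errno = %d (EBADF = %d)\n", errno, EBADF);
	return NULL;
}

int main (void)
{
	pthread_t t;
	pthread_create (&t, NULL, worker, NULL);
	pthread_join (t, NULL);
	return 0;
}
```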
We also found a cosmetic issue with uname. Most Unices put their version in the release field of the uname structure, and things like the kernel type in the version field. AIX and PASE, however, put the major version in the version field and the minor version in the release field. A simple sprintf for the AIX case was enough to fix this.
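Roughly like this (a standalone sketch of the described fix, not Mono’s actual patch):

```c
/* AIX reports e.g. version = "7", release = "2" for AIX 7.2; stitch the
 * two fields back together into the combined form other Unices report. */
#include <stdio.h>
#include <sys/utsname.h>

int main (void)
{
	struct utsname un;
	if (uname (&un) != 0)
		return 1;
#if defined (_AIX)
	char release[64];
	snprintf (release, sizeof release, "%s.%s", un.version, un.release);
	printf ("OS release: %s\n", release);
#else
	printf ("OS release: %s\n", un.release);
#endif
	return 0;
}
```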
PASE has many quirks, and these necessitated patches to work around deficiencies ranging from bugs to unimplemented functions. I aim to target IBM i 7.1 or newer, so I also worked around some bugs that have only been fixed in newer versions. A lot of this was cleaned up with some more preprocessor definitions.
Now that Mono runs on these platforms, there’s still a lot of work left to be done. The ahead-of-time compiler needs to be reworked to emit XCOFF-compatible code, libgdiplus needs to be ported, Roslyn is broken on ppc64be, continuous integration would be useful to detect build failures, the build system is still a bit weird regarding AIX libraries, and there is plenty more where that came from. Despite all this, the fact that the port already works well in its current state should provide a solid foundation going forward.
Calvin Buckley

As you may know, we have been working on bringing Mono to the WebAssembly platform. As part of this effort, we have been pursuing two strategies: one that uses the new Mono IL interpreter to run managed code at runtime, and one that uses full static (AOT) compilation to create a single .wasm file that can be executed natively by the browser.
We intend the former to be used for quickly reloading C# code and for prototyping, and the latter for publishing your final application with all optimizations enabled. The interpreter work has now been integrated into Mono’s source code, and we are using it to develop, port and tune the managed libraries to work on WebAssembly.
This post is about the progress that we have been making on doing static compilation of .NET code to run on WebAssembly.
WebAssembly static compilation in Mono is orchestrated with the mono-wasm command-line tool. This program takes IL assemblies as input and generates a series of files in an output directory, notably an index.wasm file containing the WebAssembly code for your assemblies as well as all other dependencies (the Mono runtime, the C library and the mscorlib.dll library).
$ cat hello.cs
class Hello {
    static int Main(string[] args) {
        System.Console.WriteLine("hello world!");
        return 0;
    }
}
$ mcs -nostdlib -noconfig -r:../../dist/lib/mscorlib.dll hello.cs -out:hello.exe
$ mono-wasm -i hello.exe -o output
$ ls output
hello.exe index.html index.js index.wasm mscorlib.dll
mono-wasm uses a version of the Mono compiler that, given C# assemblies, generates LLVM bitcode suitable to be passed to the LLVM WebAssembly backend. Similarly, we have been building the Mono runtime and a C library with a version of clang that also generates LLVM WebAssembly bitcode.
Until recently, mono-wasm linked all the bitcode into a single LLVM module and then performed WebAssembly code generation on it. While this created a functional .wasm file, it had the downside of taking a significant amount of time (half a minute on a recent MacBook Pro) every time we built a project, as a lot of code was in play. Some of that code – the runtime bits and the mscorlib.dll library – never changed, yet was still being processed for WebAssembly code generation every time.
We were thrilled to hear in late November of last year that the LLVM linker (lld) was getting WebAssembly support.
Since then, we changed our mono-wasm tool to perform incremental compilation of project dependencies into separate .wasm files, and we integrated lld’s new WebAssembly driver into the tool. Thanks to this approach, we now perform WebAssembly code generation only when required, and in our testing builds now complete in less than a second once the dependencies (the runtime bits and mscorlib.dll) have already been compiled into WebAssembly.
Additionally, mono-wasm used to use the LLVM WebAssembly target to create source files that would then be passed to the Binaryen toolchain to create the .wasm code. We have been testing the backend’s ability to generate .wasm object files directly (with the wasm32-unknown-unknown-wasm triple), and so far it seems promising enough that we changed mono-wasm accordingly. We also noticed a slight decrease in build time.
| | Old toolchain | New toolchain (First Compile) | New toolchain (Rebuild) |
---|---|---|---|
| Full application build | ~40s | ~30s | <1s |
| Hello World program | ~40s | <1s | <1s |
There is still a lot of work to do to bring C# to WebAssembly, but we are happy with this new approach and the progress we are making. Feel free to watch this space for further updates. You can also track the work on the mono-wasm GitHub repository.
For those of you who want to take this for a spin, you can download a preview release, unzip it and run “make” in the samples. This currently requires macOS High Sierra to run.
Laurent Sansonetti

Mono is complementing its Just-in-Time compiler and its static compiler with a .NET interpreter, allowing a few new ways of running your code.
In 2001 when the Mono project started, we wrote an interpreter for the .NET instruction set and we used this to bootstrap a self-hosted .NET development environment on Linux.
At the time, we considered the interpreter a temporary tool that we could use while we built a Just-in-Time (JIT) compiler. The interpreter (mint) and the JIT engine (mono) existed side-by-side until we could port the JIT engine to all the platforms that we supported.
When generics were introduced, the engineering cost of maintaining both the interpreter and the JIT engine was no longer worth it, so we removed the interpreter.
We later introduced full static compilation of .NET code. This is a technology we introduced to target platforms that do not allow dynamic code generation. iOS was the main driver, but it also opened the door to running Mono on gaming consoles like the PlayStation and the Xbox.
The main downside of full static compilation is that a completely new executable has to be created every time you update your code. This is a slow process, and one that is not suitable for the interactive style of development practiced by some.
For example, some game developers like to adjust and tweak their game code, without having to trigger a full recompilation. The static compilation makes this scenario impractical, so they resort to embedding a scripting language into their game code to quickly iterate and tune their projects.
This lack of .NET dynamic capabilities also prevented many interesting uses of .NET as a teaching or prototyping tool in these environments. Things like Xamarin Workbooks or simple scripting could not use .NET languages and had to resort to other solutions on these platforms.
Frank Krueger, while building his Continuous IDE, needed such an environment on iOS so much that he wrote his own .NET interpreter in F# to bring about his vision of a complete development environment for .NET on the iPad.
To address these issues, and to support some internal Microsoft products, we brought Mono’s interpreter back to life, and it is back with a twist.
We resuscitated Mono’s old interpreter and upgraded its .NET support, adding support for generics and bringing it up to date with .NET as it exists in 2017. Next up is support for mixed-mode execution.
The interpreter is one of the ways that Mono runs on WebAssembly today, for example (the other being static compilation using LLVM).
The interpreter is now part of mainline Mono and passes a large part of our extensive test suites. You can use it today when building Mono from source code, like this:
$ mono --interpreter yourassembly.exe ...
While the interpreter alone is now in great shape, we are currently working on a configuration that will allow us to mix interpreted code with statically compiled code or Just-in-Time compiled code; we call this mixed-mode execution.
For platforms like iOS, PlayStation and Xbox, this means that you can precompile your core libraries or core application and still load and execute code dynamically – gaining the benefits of having all your core libraries optimized with LLVM while keeping the flexibility of running some dynamic code.
This will allow game developers to prototype, experiment and tweak their games using .NET languages on their system without having to recompile their applications.
It will open the door to scriptable applications on device using .NET languages as well.
We are extending the capabilities of the interpreter to handle various interesting scenarios. These are some of the projects ahead of us:
The full ahead-of-time compilation versions of Mono (iOS, consoles) do not ship with an implementation of System.Reflection.Emit. This made sense when the capability could not be supported, but now that we have an interpreter, we can provide it.
There are several uses for this.
The System.Linq.Expressions API is used extensively in advanced scenarios like Entity Framework and by users leveraging the C# compiler to parse expressions into expression trees. You have probably seen code like this:
Expression<Func<int, int, int>> sum = (a, b) => a + b;
var adder = sum.Compile ();
adder (1, 2);
In full AOT scenarios, the way we made Entity Framework and the above work was to ship an interpreter for the Expression class. This expression interpreter has limitations, and it is also large.
By enabling System.Reflection.Emit powered by the interpreter, we can remove a lot of code.
This will also allow the scripting languages that have been built for .NET to work on statically compiled environments, like IronPython, IronRuby and IronScheme.
To allow this, we are completing the work for mixed-mode execution. That means that the interpreted code complements existing statically compiled .NET code.
Earlier in this post, I mentioned that one of the idioms we previously failed to address was hot-reloading of code by developers who deploy their app and tweak their game code (or any of their code, for that matter) live.
We are completing our support for AppDomains to enable this scenario.
The interpreter is a lighter-weight option for running some code. We found that certain programs can run faster interpreted than executed with the JIT engine.
We intend to explore a mixed mode of execution, sometimes called tiered compilation.
We could instruct the interpreter to execute code that is known not to be performance sensitive – for example, static constructors or other initialization code that only runs once – to reduce memory usage, the amount of generated code, and execution time.
Another consideration is to run code in interpreted mode first and, once a method exceeds some invocation threshold, switch to a JIT-compiled implementation of it – or to use attributes to annotate which methods are worth optimizing and which are not.
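As a toy illustration of that threshold idea (all names invented; this is not Mono’s implementation), a per-method counter can drive the switch from the interpreter entry point to a compiled one:

```c
/* Tiered-execution sketch: "interpret" a method until it becomes hot,
 * then swap its entry point for a "JIT-compiled" version. */
#include <stdio.h>

#define HOT_THRESHOLD 100

typedef int (*method_impl) (int);

static int interp_add_one (int x) { return x + 1; } /* slow tier */
static int jitted_add_one (int x) { return x + 1; } /* fast tier */

static struct {
	method_impl impl;
	int calls;
} method = { interp_add_one, 0 };

static int invoke (int arg)
{
	/* Count interpreted calls; promote the method once it gets hot. */
	if (method.impl == interp_add_one && ++method.calls >= HOT_THRESHOLD)
		method.impl = jitted_add_one;
	return method.impl (arg);
}

int main (void)
{
	long sum = 0;
	for (int i = 0; i < 1000; i++)
		sum += invoke (i);
	printf ("sum = %ld (promoted after %d calls)\n", sum, HOT_THRESHOLD);
	return 0;
}
```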
Miguel de Icaza

This Summer of Code, the Mono project had many exciting submissions. It’s been great to see what our applicants have been able to accomplish. Some were very familiar with the codebases they worked on, while others had to learn quickly. Let’s summarize how they spent this summer.
Mohit Mohta and Kimon Topouzidis chose to address a number of bugs and add features to CppSharp. Support for std::string was added, stack handling was fixed, new options were added, structure packing was implemented, and support for primitive types was improved. They both seem to have learned a lot about the workflow of methodically debugging systems code.
Many software bugs don’t result in immediate errors and crashes. Some corrupt program state in such a way that a cryptic error is seen much later. In the worst case, each such delayed crash may have a different stack trace. Many of these bugs have root causes that can be spotted in a running program the second they go wrong. The tooling to do so has only recently been able to spot race conditions, which can be some of the worst of these bugs. Clang has integrated a number of such sanitizers.
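As a concrete example of the class of bug these tools target, here is a minimal data race of the sort clang’s ThreadSanitizer reports (a generic illustration, not one of the Mono bugs found); build it with clang -fsanitize=thread -g:

```c
#include <pthread.h>
#include <stdio.h>

static int counter; /* shared and unsynchronized: a data race */

static void *bump (void *arg)
{
	(void) arg;
	for (int i = 0; i < 100000; i++)
		counter++; /* racy read-modify-write, flagged by TSan */
	return NULL;
}

int main (void)
{
	pthread_t a, b;
	pthread_create (&a, NULL, bump, NULL);
	pthread_create (&b, NULL, bump, NULL);
	pthread_join (a, NULL);
	pthread_join (b, NULL);
	/* The final value is unpredictable without atomics or a lock. */
	printf ("counter = %d\n", counter);
	return 0;
}
```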
Armin Hasitzka chose to use clang’s runtime sanitizers for race conditions and for memory safety to automatically catch Mono bugs. In his efforts, he ran into false positives and legitimate bugs alike. He fixed a number of bugs, helped silence false positives, and left behind infrastructure to automatically catch regressions as they appear.
Dimitar Dobrev is familiar to the Mono project. He did Google Summer of Code with Mono in 2015 and has helped maintain CppSharp since.
This summer, he sought to commit his time to developing the Qt bindings further. In the development of CppSharp, the problem of mapping C# types onto C++ templates arose. There were many potential solutions, but very few retained the feeling of the underlying API. After some experimentation, the hard problems were solved.
As the summer came to an end, he fixed the minor issues that arose during tests of QtSharp. The burden of maintaining the project and responding to bugs from the community did not stop for Dimitar; the milestones were only partially completed, but the overall contribution was significant. Development of QtSharp proceeds alongside his ongoing maintenance work and contributions.
The CBinding extension for MonoDevelop adds a lot of great functionality for working with C and C++ projects. It is still a work in progress, and Anubhav Singh wanted to add more functionality. He focused on bringing support for Windows compilers and for CMake, and took the opportunity to update the extension to the newer MonoDevelop APIs. In the process, he began upstreaming some changes to MonoDevelop.
Something often mentioned around a warm laptop with spinning fans is how nice C developers have it. CCache enables someone to recompile large C projects after minor modifications in a very small amount of time. Going beyond the build system skipping recompilation, the system compiler is wrapped by a program that spits back the old output in a fraction of the time that a compiler takes. This is a trick that managed languages haven’t learned until now.
Daniel Calancea created a tool which wraps mcs and understands the commands sent to it. If it is invoked with the same files and the same options twice, it checks that the hashes of all of the files are the same between runs; if so, it replays the output of the first C# compiler run, as sketched below. Equally important, the tool returns the same return codes as the first run and integrates as seamlessly into any build system as ccache does. It even reports the same warnings that the initial compiler did.
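In spirit, the cache-key computation looks something like this sketch (FNV-1a is chosen here for brevity; the real tool’s hashing and storage details may well differ):

```c
/* Derive a cache key from the compiler options plus the contents of
 * every input source file; a repeated key means the stored output and
 * exit code from the first run can be replayed. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint64_t fnv1a (uint64_t h, const void *data, size_t len)
{
	const unsigned char *p = data;
	for (size_t i = 0; i < len; i++) {
		h ^= p[i];
		h *= 1099511628211ULL;
	}
	return h;
}

int main (int argc, char **argv)
{
	uint64_t key = 14695981039346656037ULL; /* FNV offset basis */
	char buf[4096];

	for (int i = 1; i < argc; i++) {
		/* Hash each option or file name... */
		key = fnv1a (key, argv[i], strlen (argv[i]));
		/* ...and, for source files, their contents as well. */
		if (strstr (argv[i], ".cs")) {
			FILE *f = fopen (argv[i], "rb");
			size_t n;
			if (!f)
				continue;
			while ((n = fread (buf, 1, sizeof buf, f)) > 0)
				key = fnv1a (key, buf, n);
			fclose (f);
		}
	}
	printf ("cache key: %016llx\n", (unsigned long long) key);
	return 0;
}
```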
Daniel published this tool for Windows and Linux to Nuget.
Mono’s implementation of System.IO.Pipes has historically lacked some features available in the CLR. After msbuild was made open source, users found that Mono unfortunately could not build in parallel because of the API differences. CoreFX brought with it the promise of a System.IO.Pipes.PipeStream that would enable parallel msbuild. CoreFX’s API surface was not strictly a superset of Mono’s, though: Mono implemented a couple of endpoints that CoreFX did not, and we used those endpoints in other places in the BCL.
Georgios Athanasopoulos chose to do the work required to make Mono work with CoreFX’s PipeStream. Modifying both CoreFX and Mono was required. Mono’s build system had to choose to use the new implementation files, rather than looking for them in the BCL directory. His work was a success. Finishing early, he chose to experimentally enable a parallel msbuild and test it. Things seem to be mostly working.
Often when debugging C# code in the middle of a large project, it’s important to invoke code to understand how variables are behaving in a segment of code. Sometimes, the code that one wishes to invoke hasn’t been written yet. The developer is left squinting at variables, invoking existing methods, and running code manually in their head. Much better would be to let the developer write a new function and invoke it on the variables in question. Interpreted languages usually support this without much trouble, because code doesn’t have as much metadata associated with it and because they ship integrated compilers for the languages being debugged.
This summer, Haruka Matsumoto worked on a system that enables developers to use such arbitrary code snippets entered into the debugger. Mono runs the debugger and the debuggee in separate instances of the runtime. As the runtime of the application being debugged doesn’t have access to a C# compiler, this code has to be compiled by the debugger, which uses Roslyn to compile the code segments; the resulting assembly is then sent to the debugged application’s runtime.
This is made more difficult by the fact that the debugger is trying to run a lambda that has access to the variables and methods defined in the functions currently being debugged. Shorter method names need to resolve to what they would if the original function had used them, and variables should be accessible by name. Issues with private types are potentially unsolvable without special casing, as Mono prevents arbitrary code from modifying private fields. Haruka handled these and other difficult considerations, and delivered a very strong prototype of lambda support in the integrated runtime debugger. It should be immediately useful for anybody who spends a lot of time using Mono to debug C# code.
It is often the case that small differences in the implementations of core runtime functions can result in perceived bugs introduced by switching runtimes. The differences are due to depending on API behavior that may not be entirely defined by the specification, but works in a certain case on a certain machine. This sensitivity is nowhere more baffling to debug than around threading and synchronization primitives. The .NET Core Project contains an open-source, cross-platform implementation of C# synchronization primitives. We expect this to receive much community development and user testing. We hoped to import them to gain both consistent behavior and quality.
This summer, Alexander Efremov imported EventWaitHandle, AutoResetEvent, ManualResetEvent, Mutex and Semaphore into Mono. He both manually integrated these libraries into Mono and automated the process of building them. System.Private.CoreLib.Native was successfully added to Mono, and System.Threading was identified as the next API to import in order to enable importing Thread from CoreFX.
Alexander Kyte

Mono 5.2 is out in the stable channel!
Check out our release notes for more details about what is new on Mono 5.2.
This release was made up of nearly 1000 commits since Mono 5.0 and is the result of many months of work by the Mono team and contributors!
Alexander Köplinger

As part of our ongoing efforts to improve Mono’s profiling infrastructure, in Mono 5.6 we will be shipping an overhaul of Mono’s profiler API. This is the part of Mono’s embedding API that deals with instrumenting managed programs for the purpose of collecting data regarding allocations, CPU usage, code coverage, and other data produced at runtime.
The old API had limitations that prevented some features and capabilities from being implemented. The upgraded API allows us to:
- Reconfigure profiling features at runtime
- Inspect the values of incoming parameters and return values
- Instrument managed allocators, allowing allocations to be profiled
This is what we did.
We wanted the ability to reconfigure profiling options at runtime. This was not possible with the old API because none of the API functions took an argument representing the profiler whose options should be changed.
This meant it was only possible to change the options of the most recently installed profiler, which was not guaranteed to be the one you wanted. Additionally, doing so was not thread safe.
Why would we want to change profiling options at runtime, you might wonder? Suppose you know that only a particular area of your program has performance issues and you’re only interested in data gathered while your program is executing that code. With this capability, you can turn off profiling features such as allocations and statistical sampling until you get to the point you want to profile, and then turn them on programmatically. This can significantly reduce the noise caused by unneeded data in a profiling session.
Call context introspection allows a profiler to instrument the prologue and/or epilogue of any method and gain access to arguments (including the this reference), local variables, and the return value.
This opens up countless possibilities for instrumenting framework methods to learn how a program is utilizing facilities like the thread pool, networking, reflection and so on. It can also be useful for debugging, especially if dealing with assemblies for which the source code is not available.
Another improvement we were able to make thanks to the redesigned API was to use instrumented managed allocators when profiling. In the past, we would disable managed allocators entirely when profiling. This would slow down allocation-heavy programs significantly. Now, we insert a call back to the profiler API at the end of managed allocators if profiling is enabled.
On top of these major features, the new API is also simply more pleasant to use. In particular, you no longer have to worry about setting event flags; you simply install a callback and you will get events. Also, you no longer have to use callback installation functions which take multiple callback arguments. Every kind of callback now has exactly one function to install it. This means you will no longer have code such as mono_profiler_install_assembly (NULL, NULL, load_asm, NULL); where it can be unclear which argument corresponds to which callback. Finally, several unused, deprecated, or superseded features and callbacks have been removed.
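Based on the description above, a minimal profiler module against the new API might look roughly like this sketch; the module name (sample) and the callback chosen are illustrative, and exact signatures may vary between Mono versions:

```c
#include <mono/metadata/profiler.h>
#include <stdio.h>

/* Per-profiler state lives in this user-defined struct. */
struct _MonoProfiler { int dummy; };

static MonoProfiler profiler;

static void
assembly_loaded (MonoProfiler *prof, MonoAssembly *assembly)
{
	(void) prof; (void) assembly;
	puts ("assembly loaded");
}

/* Entry point the runtime looks for when loading the module. */
void
mono_profiler_init_sample (const char *desc)
{
	(void) desc;
	/* Each profiler gets its own handle, so options can later be
	 * reconfigured for *this* profiler specifically. */
	MonoProfilerHandle handle = mono_profiler_create (&profiler);
	/* Exactly one setter per callback kind: no NULL-padded,
	 * multi-callback install calls like the old API had. */
	mono_profiler_set_assembly_loaded_callback (handle, assembly_loaded);
}
```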
The new API completely replaces the old one, so this is a breaking change. We try very hard not to break API/ABI compatibility in Mono’s embedding API, but after much consideration and evaluation of the alternatives, a breaking change was deemed the most sensible way forward. To aid with the transition to the new API, Mono will detect and refuse to load profiler modules that use the old API. Developers who wish to support both the old and new APIs by compiling separate versions of their profiler module may find the new MONO_PROFILER_API_VERSION macro useful.
A presentation with more details is available in PowerPoint and PDF formats.
Alex Rønne Petersen