bitburner-src/markdown/bitburner.ns.ramoverride.md
Michael Taylor dcd2f33f7c CODEBASE: Update api-documenter and api-extractor (#2320)
* Update api-documenter and api-extractor. #1566 follow-up.

I have verified that the HTML/markdown table generation bug in
[#4878](https://github.com/microsoft/rushstack/issues/4878) in rushstack's
api-documenter has been fixed as per rushstack#5256. The test case
[repro](https://github.com/catloversg/api-documenter-bug-pr-4578) now
produces the expected output.

I have confirmed that the output generated in bitburner from
`npm run doc` now contains HTML tables, and correctly inserts
a blank line between the `</table>` tag and the following line (e.g. Returns).

Stylistically it could use some whitespace, but it is correctly rendered.

This commit contains only the updated packages, not the regenerated
documentation. I assume that is produced automatically by the GitHub
workflow.

* Follow up to 5f732a6f35, include `npm run doc` changed docs.

* Add missing license info

* Fix React warning

---------

Co-authored-by: CatLover <152669316+catloversg@users.noreply.github.com>
2025-09-26 14:52:39 -07:00


Home > bitburner > NS > ramOverride

NS.ramOverride() method

Change the current static RAM allocation of the script.

Signature:

```typescript
ramOverride(ram?: number): number;
```

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| ram | number | (Optional) The new RAM limit to set. |

Returns:

number

The new static RAM limit, which will be the old limit if it was not changed. This means you can call the method with no argument to check the current RAM limit.

Remarks

RAM cost: 0 GB

This acts analogously to the ramOverride parameter in runOptions, but for changing RAM in the current running script. The static RAM allocation (the amount of RAM used by ONE thread) will be adjusted to the given value, if possible. This can fail if the number is less than the current dynamic RAM limit, or if adjusting upward would require more RAM than is available on the server.
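To make the check-then-adjust pattern concrete, here is a minimal sketch. Note that `ns` below is a hypothetical stand-in object mimicking the in-game NS API so the snippet runs outside Bitburner; in a real script, `ns` is the object passed to your `main` function.

```javascript
// Illustrative stand-in for the Bitburner NS API (assumption: real behavior
// may also fail and keep the old limit, e.g. if server RAM is insufficient).
const ns = {
  _ram: 1.6,
  ramOverride(ram) {
    if (ram !== undefined) this._ram = Math.round(ram * 100) / 100;
    return this._ram;
  },
};

// Calling with no argument just reads the current static RAM limit.
const current = ns.ramOverride();

// Request a new limit; the return value is the limit actually in effect
// afterward, so comparing it to the request detects a failed adjustment.
const requested = 4;
const applied = ns.ramOverride(requested);
if (applied !== requested) {
  // In-game this would mean the adjustment failed (e.g. the value was below
  // the dynamic RAM limit, or the server lacked free RAM).
}
```

Comparing the return value against the requested value is the simplest way to confirm the override took effect, since the method never throws on failure in this sketch.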

RAM usage will be rounded to the nearest hundredth of a GB, which is the granularity of all RAM calculations.
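One way to express that hundredth-of-a-GB granularity is shown below; this is an illustrative helper, not the game's actual internal implementation.

```javascript
// Round a RAM amount to the nearest hundredth of a GB, the granularity
// used by all RAM calculations (sketch; exact in-game rounding may differ).
function roundRam(ram) {
  return Math.round(ram * 100) / 100;
}
```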